sentences (sequence) | labels (sequence)
---|---
[
"Most Chinese pre-trained models take character as the basic unit and learn representation according to character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful utterance in Chinese.",
"Hence, we propose a novel word-aligned attention to exploit explicit word information, which is complementary to various character-based Chinese pre-trained language models.",
"Specifically, we devise a pooling mechanism to align the character-level attention to the word level and propose to alleviate the potential issue of segmentation error propagation by multi-source information fusion.",
"As a result, word and character information are explicitly integrated at the fine-tuning procedure.",
"Experimental results on five Chinese NLP benchmark tasks demonstrate that our method achieves significant improvements against BERT, ERNIE and BERT-wwm.",
"Pre-trained language Models (PLM) such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), ERNIE (Sun et al., 2019), BERT-wwm (Cui et al., 2019) and XLNet (Yang et al., 2019) have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification (Pang et al., 2002), natural language inference (Bowman et al., 2015), named entity recognition (Sang and De Meulder, 2003) and so on.",
"Generally, most popular PLMs prefer to use attention mechanism (Vaswani et al., 2017) to represent the natural language, such as word-to-word self-attention for English.",
"Unlike English, in Chinese, words are not separated by explicit delimiters.",
"Since without word boundaries information, it is Corresponding author intuitive to model characters in Chinese tasks directly.",
"However, in most cases, the semantic of a single Chinese character is ambiguous.",
"For example, the character in word (bat) and (auction) has entirely different meanings.",
"Moreover, several recent works have demonstrated that considering the word segmentation information can lead to better language understanding, and accordingly benefits various Chinese tasks (Wang et al., 2017; Li et al., 2018; Zhang and Yang, 2018; Gui et al., 2019; Mengge et al., 2019).",
"All these factors motivate us to expand the character-level attention mechanism in Chinese PLMs to represent the semantics of words 1 .",
"To this end, there are two main challenges.",
"(1) How to seamlessly integrate the segmentation information into character-based attention module of PLM is an important problem.",
"(2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by Chinese word segmentation (CWS) tools (Li et al., 2019) is another challenge.",
"In this paper, we propose a new architecture, named M ulti-source W ord A ligned Attention (MWA), to solve the above issues.",
"(1) Psycholinguistic experiments (Bai et al., 2008; Meng et al., 2014) have shown that readers are likely to pay approximate attention to each character in one Chinese word.",
"Drawing inspiration from such find-ings, we introduce a novel word-aligned attention, which could aggregate attention weight of characters in one word into a unified value with the mixed pooling strategy (Yu et al., 2014).",
"(2) For reducing segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters and deploy 1 Considering the enormous cost of re-training a language model, we hope to incorporate word segmentation information to the fine-tuning process to enhance performance, and leave how to improve the pre-training procedure for a future work.",
"a fusion function to pull together their disparate outputs.",
"As shown in Table 1, different CWS tools may have different annotation granularity.",
"Through comprehensive consideration of multi-granularity segmentation results, we can implicitly reduce the error caused by automatic annotation.",
"Extensive experiments are conducted on various Chinese NLP tasks including sentiment classification, named entity recognition, sentence pair matching, natural language inference and machine reading comprehension.",
"The results and analysis show that the proposed method boosts BERT, ERNIE and BERT-wwm significantly on all the datasets 2 .",
"The primary goal of this work is to inject the word segmentation knowledge into character-level Chinese PLMs and enhance original models.",
"Given the strong performance of deep Transformers trained on language modeling, we adopt BERT and its updated variants (ERNIE, BERT-wwm) as the basic encoder in this work, and the outputs from the last layer of encoder are treated as the character-level enriched contextual representations H .",
"Although character-level Chinese PLM has remarkable ability to capture language knowledge from text, it neglects the semantic information expressed in the word level.",
"Therefore we apply a word-aligned layer on top of the encoder to integrate the 2 The source code of this paper can be obtained from https://github.com/lsvih/MWA .",
"word boundary information into the representation of characters with an attention aggregation module.",
"For an input sequence with n characters S = [ c 1 , c 2 , ..., c n ] , where c j denotes the j -th character, CWS tool is used to partition S into nonoverlapping word blocks: ( S ) = [ w 1 , w 2 , ..., w m ] , ( m n ) (1) where w i = { c s , c s +1 , ..., c s + l 1 } is the i -th segmented word with a length of l and s is the index of w i 's first character in S .",
"We apply self-attention operation with the representations of all input characters to get the character-level attention score matrix A c R n n .",
"It can be formulated as: A c = F ( H ) = softmax (( KW k )( QW q ) T d ) (2) where Q and K are both equal to the collective representation H at the last layer of the Chinese PLM, W k R d d and W q R d d are trainable parameters for projection.",
"While A c models the relationship between two arbitrarily characters without regard to the word boundary, we argue that incorporating word as atom in the attention can better represent the semantics, as the literal meaning of each individual character can be quite different from the implied meaning of the whole word, and the simple weighted sum in the character level may lose word and word sequence information.",
"To address this issue, we propose to align A c in the word level and integrate the inner-word attention.",
"For ease of exposition, we rewrite A c as [ a 1 c , a 2 c , ..., a nc ] , where a ic R n denotes the i -th row vector of A c , that is, a ic represents the attention score vector of the i -th character.",
"Then we deploy to segment A c according to ( S ) .",
"For example, if ( S ) = [ { c 1 , c 2 } , { c 3 } , ..., { c n 1 , c n } ] , then ( A c ) = [ { a 1 c , a 2 c } , { a 3 c } , ..., { a n 1 c , a nc } ] (3) In this way, an attention vector sequence is divided into several subsequences and each subsequence represents the attention of one word.",
"Then, motivated by the psycholinguistic finding that readers are likely to pay similar attention to each character in one Chinese word, we devise an appropriate aggregation module to fuse the inner-word attention.",
"Concretely, we first transform { a sc , ..., a s + l 1 c } into one attention vector a iw for w i with the mixed pooling strategy (Yu et al., 2014) 3 .",
"Then we execute the piecewise upsampling operation over each a iw to keep input and output dimensions unchanged for the sake of plug and play.",
"The detailed process can be summarized as: a iw = Maxpooling ( { a sc , ..., a s + l 1 c } ) (4) + (1 ) Meanpooling ( { a sc , ..., a s + l 1 c } ) A c [ s : s + l 1] = e l a iw (5) where R 1 is a weighting trainable variable to balance the mean and max pooling, e l = [1 , ..., 1] T represents a l -dimensional all-ones vector, l is the length of w i , e l a iw = [ a iw , ..., a iw ] denotes the kronecker product operation between e l and a iw , A c R n n is the aligned attention matrix.",
"Eqs.",
"4 and 5 can help incorporate word segmentation information into character-level attention calculation process, and determine the attention vector of one character from the perspective of the whole word, which is beneficial for eliminating the attention bias caused by character ambiguity.",
"Finally, we can obtain the enhanced character representation produced by word-aligned attention as follows: H = A c VW v (6) where V = H , W v R d d is a trainable projection matrix.",
"Besides, we also use multi-head attention (Vaswani et al., 2017) to capture information from different representation subspaces jointly, thus we have K different aligned attention matrices A k c (1 k K ) and corresponding representation H k .",
"With multi-head attention architecture, the output can be expressed as follows: H = Concat ( H 1 , H 2 , ..., HK ) W o (7) 2.3 Multi-source Word-aligned Attention As mentioned in Section 1, our proposed word-aligned attention relies on the segmentation results 3 Other pooling methods such as max pooling or mean pooling also works.",
"Here we choose mixed pooling because it has the advantages of distilling the global and the most prominent features in one word at the same time.",
"of CWS tool .",
"Unfortunately, a segmenter is usually unreliable due to the risk of ambiguous and non-formal input, especially on out-of-domain data, which may lead to error propagation and an unsatisfactory model performance.",
"In practice, the ambiguous distinction between morphemes and compound words leads to the cognitive divergence of words concepts, thus different may provide diverse ( S ) with various granularities.",
"To reduce the impact of segmentation error and effectively mine the common knowledge of different segmenters, it's natural to enhance the word-aligned attention layer with multi-source segmentation inputs.",
"Formally, assume that there are M popular CWS tools employed, we can obtain M different representations H 1 , ..., HM by Eq.",
"7.",
"Then we propose to fuse these semantically different representations as follows: H = M (cid:88) m =1 tanh( H m W g ) (8) where W g is a parameter matrix and H denotes the final output of the MWA attention layer.",
"To test the applicability of the proposed MWA attention, we choose three publicly available Chinese pre-trained models as the basic encoder: BERT, ERNIE, and BERT-wwm.",
"In order to make a fair comparison, we keep the same hyper-parameters (such maximum length, warm-up steps, initial learning rate, etc.) as suggested in BERT-wwm (Cui et al., 2019) for both baselines and our method on each dataset.",
"We run the same experiment for five times and report the average score to ensure the reliability of results.",
"Besides, three popular CWS tools: thulac (Sun et al., 2016), ictclas (Zhang et al., 2003) and hanlp (He, 2014) are employed to segment sequence.",
"NLP tasks and six public benchmark datasets: Sentiment Classification (SC) : We adopt ChnSentiCorp 4 and weibo-100k sentiment dataset 5 in this task.",
"ChnSentiCorp dataset has about 10k sentences, which express positive or negative emotion.",
"weibo-100k dataset contains 1.2M microblog 4 https://github.com/pengming617/bert_ classification 5 https://github.com/SophonPlus/ ChineseNlpCorpus/ Dataset Task Max length Batch size Epoch lr Dataset Size Train Dev Test ChnSentiCorp SC 256 16 3 3 10 5 9.2K 1.2K 1.2K weibo-100k 128 64 2 2 10 5 100K 10K 10K ontonotes NER 256 16 5 3 10 5 15.7K 4.3K 4.3K LCQMC SPM 128 64 3 3 10 5 239K 8.8K 12.5K XNLI NLI 128 64 2 3 10 5 392K 2.5K 2.5K DRCD MRC 512 16 2 3 10 5 27K 3.5K 3.5K Table 2: Summary of datasets and the corresponding hyper-parameters setting.",
"Named Entity Recognition (NER) : this task is to test model's capacity of sequence tagging.",
"We use a common public dataset Ontonotes 4.0 (Weischedel et al., 2011) in this task.",
"Sentence Pair Matching (SPM) : We use the most widely used dataset LCQMC (Liu et al., 2018) in this task, which aims to identify whether two questions are in a same intention.",
"Natural Language Inference (NLI) : this task is to exploit the contexts of text and concern inference relationships between sentences.",
"XNLI (Con-neau et al., 2018) is a cross-language language understanding dataset; we only use the Chinese language part of XNLI to evaluate the language understanding ability.",
"And we processed this dataset in the same way as ERNIE (Sun et al., 2019) did.",
"Machine Reading Comprehension (MRC) : MRC is a representative document-level modeling task which requires to answer the questions based on the given passages.",
"DRCD (Shao et al., 2018) is a public span-extraction Chinese MRC dataset, whose answers are spans in the document.",
"We implement our model with PyTorch (Paszke et al., 2019), and all baselines are converted weights into PyTorch version.",
"All experiments employ modified Adam (Devlin et al., 2019) as optimizer with 0.01 weight decay and 0.1 warmup ratio.",
"All pre-trained models are configured to 12 layers and 768 hidden dimension.",
"The detail settings are shown in Table 2.",
"Table 3 shows the performances on five classical Chinese NLP tasks with six public datasets.",
"Generally, our method consistently outperforms all baselines on all five tasks, which demonstrates the effectiveness and universality of the proposed approach.",
"Moreover, the Wilcoxon's test shows that a significant difference ( p < 0 . 05 ) exits between our model and baseline models.",
"In detail, on the two datasets of SC task, we observe an average of 0.53% and 0.83% absolute improvement in F1 score, respectively.",
"SPM and NLI tasks can also gain benefits from our enhanced representation.",
"For the NER task, our method obtains 0.92% improvement averagely over all baselines.",
"Besides, introducing word segmentation information into the encoding of character sequences improves the MRC performance on average by 1.22 points and 1.65 points in F1 and Exact Match (EM) score respectively.",
"We attribute such significant gain in NER and MRC to the particularity of these two tasks.",
"Intuitively, Chinese NER is correlated with word segmentation, and named entity boundaries are also word boundaries.",
"Thus the potential boundary information presented by the additional segmentation input can provide better guidance to label each character, which is consistent with the conclusion in (Zhang and Yang, 2018).",
"Similarly, the span-extraction MRC task is to extract answer spans from document (Shao et al., 2018), which also faces the same word boundary problem as NER, and the long sequence in MRC exacerbates the problem.",
"Therefore, our method gets a relatively greater improvement on the DRCD dataset.",
"To demonstrate the effectiveness of our multi-source fusion method, we carry out experiments on the DRCD dev set with different segmentation inputs.",
"Besides, we also design two strong baselines by introducing a Transformer layer ( 1 T ) and a random tokenizer model (WA random ) to exclude the benefits from additional parameters.",
"As shown in Table 4, adding additional parameters by introducing an extra transformer layer can benefit the PLMs.",
"Compared with 1 T and WA random , our proposed word-aligned attention gives quite stable improvements no matter what CWS tool we use, which again confirms the effectiveness and rationality of incorporating word segmentation information into character-level PLMs.",
"Another observation is that Task SC NER SPM NLI MRC Dataset ChnSenti 2 , 3 weibo-100k 2 Ontonotes 4 LCQMC 2 , 3 , 4 XNLI 1 , 2 , 3 , 4 DRCD 2 , 3 [EM | F1] Prev.",
"employing multiple segmenters and fusing them together could introduce richer segmentation information and further improve the performance.",
"For fair comparison and demonstrating the improvement of our model is not only rely on more trainable parameters, we also conduct experiments on the DRCD dev set to explore whether the performance keeps going-up with more parameters by introducing additional transformer blocks on top of the representations of PLMs.",
"In Table 5, +1 T denotes that we introduce another one Transformer layer on top of BERT-wwm and +2 T means additional 2 layers, M denotes million.",
"As the experimental results showed, when the number of additional layers exceeds 1, the performance starts to decline, which demonstrates that using an extensive model on top of the PLM representations may not bring additional benefits.",
"We can conclude that MWA doesn't introduce too many parameters, and MWA achieves better performance than +1T under the similar parameter numbers.",
"Besides, we also make comparison with the current best Chinese PLM: Robust-BERT-wwm-ext-large (Cui et al., 2019), a 24-layers Chinese PLM with 13.5 times more pre-training data and 3.1 times more parameters than BERT-wwm, experimental results show that our model can achieve comparable performance, which again confirms the effectiveness of incorporating word segmentation information into character-level PLMs.",
"In this paper, we develop a novel Multi-source Word Aligned Attention model (referred as MWA), which integrates word segmentation information into character-level self-attention mechanism to enhance the fine-tuning performance of Chinese PLMs.",
"We conduct extensive experiments on five NLP tasks with six public datasets.",
"The proposed approach yields substantial improvements compared to BERT, BERT-wwm and ERNIE, demonstrating its effectiveness and universality.",
"Furthermore, the word-aligned attention can also be applied to English PLMs to bridge the semantic gap between the whole word and the segmented Word-Piece tokens, which we leave for future work.",
"We would like to thank reviewers for their insightful comments.",
"This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences, Grant No.",
"XDC02040400."
] | [
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"method",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"While sophisticated Visual Question Answering models have achieved remarkable success, they tend to answer questions only according to superficial correlations between question and answer.",
"Several recent approaches have been developed to address this language priors problem.",
"However, most of them predict the correct answer according to one best output without checking the authenticity of answers.",
"Besides, they only explore the interaction between image and question, ignoring the semantics of candidate answers.",
"In this paper, we propose a select-and-rerank (SAR) progressive framework based on Visual Entailment.",
"Specifically, we first select the candidate answers relevant to the question or the image, then we rerank the candidate answers by a visual entailment task, which verifies whether the image semantically entails the synthetic statement of the question and each candidate answer.",
"Experimental results show the effectiveness of our proposed framework, which establishes a new state-ofthe-art accuracy on VQA-CP v2 with a 7 .",
"55% improvement.",
"1 1 Introduction Visual Question Answering (VQA) task is a multimodal problem which requires the comprehensive understanding of both visual and textual information.",
"Presented with an input image and a question, the VQA system tries to determine the correct answer in the large prediction space.",
"Recently, some studies (Jabri et al., 2016; Agrawal et al., 2016; Zhang et al., 2016; Goyal et al., 2017) demonstrate that VQA systems suffer from the superficial correlation bias (i.e. language priors) caused by accidental correlations between answers and questions.",
"As a result, traditional VQA models always output the Corresponding author: Zheng Lin.",
"most common answer(Selvaraju et al., 2019) of the input sample's question category, no matter what image is given.",
"To address this language priors problem, various approaches have been developed.",
"However, through exploring the characteristics of the existing methods, we find that whether the general VQA models such as UpDn(Anderson et al., 2018) and LXMERT(Tan and Bansal, 2019) or models carefully designed for language priors, as LMH(Clark et al., 2019) and SSL(Zhu et al., 2020), yield a non-negligible problem.",
"Both models predict the correct answer according to one best output without checking the authenticity of answers.",
"Besides, these models have not made good use of the semantics information of answers that could be helpful for alleviating the language-priors.",
"As presented in Figure",
"1(a), quite a few correct answers usually occur at top N candidates rather than top one.",
"Meanwhile, if the top N candidate answers are given, the image can further verify the visual presence/absence of concepts based on the combination of the question and the candidate answer.",
"As shown in Figure",
"1(b), the question is about the color of the bat and two candidate answers are yellow and black.",
"After checking the correctness of candidate answers, the wrong answer yellow which is contradicted with the image can be excluded and the correct answer black which is consistent with the image is confirmed.",
"Nevertheless, this visual verification, which utilizes answer semantics to alleviate language priors, has not been fully investigated.",
"In this paper, we propose a select-and-rerank (SAR) progressive framework based on Visual Entailment.",
"The intuition behind the proposed framework comes from two observations.",
"First, after excluding the answers unrelated to the question and image, the prediction space is shrunken and we can obtain a small number of candidate answers.",
"Second, on the condition that a question and one of its candidate answer is bridged into a complete statement, the authenticity of this statement can be inferred by the content of the image.",
"Therefore, after selecting several possible answers as candidates, we can utilize the visual entailment, consisting of image-text pairs, to verify whether the image semantically entails the synthetic statement.",
"Based on the entailment degree, we can further rerank candidate answers and give the model another chance to find the right answer.",
"To summarize, our contributions are as follows: 1. We propose a select-and-rerank progressive framework to tackle the language priors problem, and empirically investigate a range of design choices for each module of this framework.",
"In addition, it is a generic framework, which can be easily combined with the existing VQA models and further boost their abilities.",
"2. We highlight the verification process between text and image, and formulate the VQA task as a visual entailment problem.",
"This process makes full use of the interactive information of image, question and candidate answers.",
"3. Experimental results demonstrate that our framework establishes a new state-of-the-art accuracy of 66 .",
"73% , outperforming the existing methods by a large margin.",
"Language-Priors Methods To address the language prior problem of VQA models, a lot of approaches have been proposed, which can be roughly categorized into two lines: (1) Designing",
"Designing Specific Debiasing Models to Reduce Biases.",
"Most works of this line are ensemble-based methods (Ramakrishnan et al., 2018; Grand and Belinkov, 2019; Belinkov et al., 2019; Cadene et al., 2019; Clark et al., 2019; Mahabadi and Henderson, 2019), among these, LMH(Clark et al., 2019) reduces all biases between question-answer pairs by penalizing the samples that can be answered without utilizing image content.",
"(2) Data Augmentation to Reduce Biases.",
"The main idea of such works (Zhang et al., 2016; Goyal et al., 2017; Agrawal et al., 2018) is to carefully construct more balanced datasets to overcome priors.",
"For example, the recent method SSL(Zhu et al., 2020) first automatically generates a set of balanced question-image pairs, then introduces an auxiliary self-supervised task to use the balanced data.",
"CSS(Chen et al., 2020a) balances the data by adding more complementary samples which are generated by masking objects in the image or some keywords in the question.",
"Based on CSS, CL(Liang et al., 2020) forces the model to utilize the relationship between complementary samples and original samples.",
"Unlike SSL and CSS which do not use any extra manual annotations, MUTANT(Gokhale et al., 2020) locates critical objects in the image and critical words in the question by utilizing the extra object-name labels, which directly helps the model to ground the textual concepts in the image.",
"However, above methods only explore the interaction between the image and the question, ignoring the semantics of candidate answers.",
"In this paper, we propose a progressive VQA framework SAR which achieves better interaction among the question, the image and the answer.",
"Answer Re-ranking Although Answer Reranking is still in the infancy in VQA task, it has been widely studied for QA tasks like open-domain question answering, in which models need to answer questions based on a broad range of open-domains knowledge sources.",
"Recent works (Wang et al., 2018b,a; Kratzwald et al., 2019) address this task in a two-stage manner: extract candidates from all passages, then focus on these candidate answers and rerank them to get a final answer.",
"RankVQA(Qiao et al., 2020) introduces Answer Re-ranking method to VQA task.",
"They design an auxiliary task which reranks candidate answers according to their matching degrees with the input image and off-line generated image captions.",
"However, RankVQA still predicts the final answer from Figure 2: Overview of the progressive framework SAR.",
"the huge prediction space rather than selected candidate answers.",
"Figure 2 shows an overview of the proposed select-and-rerank (SAR) framework, which consists of a Candidate Answer Selecting module and an Answer Re-ranking module.",
"In the Candidate Answer Selecting module, given an image and a question, we first use a current VQA model to get a candidate answer set consisting of top N answers.",
"In this module, the answers irrelevant to the question can be filtered out.",
"Next, we formulate the VQA as a VE task in the Answer Re-ranking module, where the image is premise and the synthetic dense caption(Johnson et al., 2016) (combination of the answer and the question ) is hypothesis.",
"We use the cross-domain pre-trained model LXMERT(Tan and Bansal, 2019) as VE scorer to compute the entailment score of each image-caption pair, and thus the answer corresponding to the dense caption with the highest score is our final prediction.",
"The Candidate Answer Selector (CAS) selects several answers from all possible answers as candidates and thus shrinks the huge prediction space.",
"Given a VQA dataset D = { I i , Q i } Mi =1 with M samples, where I i I , Q i Q are the image and question of the i th sample and A is the whole prediction space consisting of thousands of answer categories.",
"Essentially, the VQA model applied as CAS is a | A | -class classifier, and is a free choice in our framework.",
"Given an image I i and a question Q i , CAS first gives the regression scores over all optional answers: P ( A | Q i , I i ) .",
"Then CAS chooses N answers A i with top N scores as candidates, which is concluded as follows: A i = topN ( argsort ( P ( A | Q i , I i ))) (1) N (hyper-parameter) candidate answers A i = [ A 1 i , A 2 i , ..., A Ni ] are selected for each ( I i , Q i ) pair by CAS, forming a dataset D (cid:48) = { I i , Q i , A ni } M ,N i =1 ,n =1 with M N instances, where A ni A i , for the next Answer Re-ranking module.",
"In this paper, we mainly use SSL as our CAS.",
"We also conduct experiments to analyze the impact of different CAS and different N .",
"Visual Entailment (VE) task is proposed by Xie et al. (2019), where the premise is a real-world image, denoted by P image , and the hypothesis is a text, denoted by H text .",
"Given a sample of ( P image , H text ), the goal of VE task is to determine whether the H text can be concluded based on the information of P image .",
"According to following protocols, the label of the sample is assigned to (1) Entailment , if there is enough evidence in P image to conclude H text is true.",
"(2) Contradiction , if there is enough evidence in P image to conclude H text is false.",
"(3) Neutral , if there is no sufficient evidence in P image to give a conclusion about H text .",
"A question Q i and each of its candidate answers A i can be bridged into a complete statement, and then the image could verify the authenticity of each statement.",
"More specifically, the visual presence of concepts (e.g. black bat/yellow bat) based on the combination of the question and the correct/wrong candidate answer can be en-tailed/contradicted by the content of the image.",
"In this way, we achieve better interaction among question, image and answer.",
"Therefore, we formulate VQA as a VE problem, in which the image I i is premise, and the synthetic statement of an answer A ni in A i and question Q i , represented as ( Q i , A ni ), is hypothesis.",
"For an image, synthetic statements of different questions describe different regions of the same image.",
"Following Johnson et al. (2016), we also refer to the synthetic statement as dense caption.",
"We use A + i to represent the A ni if A ni is the correct answer of Q i , use A i otherwise.",
"There is enough evidence in I i to prove ( Q i , A + i ) is true, i.e. the visual linguistic semantically entails ( Q i , A + i ).",
"And there is enough evidence in I i to prove ( Q i , A i ) is false, i.e. the visual linguistic semantically contradicts ( Q i , A i ).",
"Note that, there is no Neutral label in our VE task and we only have two labels: Entailment and Contradiction .",
"We re-rank dense captions by contrastive learning, that is, ( Q i , A + i ) should be more semantically similar to I i than ( Q i , A i ).",
"The right part of Figure 2 illustrates this idea.",
"The more semantically similar I i to ( Q i , A ni ), the deeper the visual entailment degree is.",
"We score the visual entailment degree of I i to each ( Q i , A ni ) ( Q i , A i ) and rerank the candidate answers A i by this score.",
"The ranking-first answer is our final output.",
"Question-Answer Combination Strategy The answer information makes sense only when combine it with the question.",
"We encode the combination of question and answer text to obtain the joint concept.",
"We design three question-answer combination strategies: R , C and R C to combine question and answer into synthetic dense caption C i : R : Replace question category prefix with answer .",
"The prefix of each question is the question category such as are there, what color, etc.",
"For instance, given a question How many flowers in the vase?, its answer 8 and its question category how many, the resulting dense caption is 8 flowers in the vase.",
"Similarly, No a crosswalk is the result of question Is this a crosswalk? and answer No.",
"We build a dictionary of all question categories of the train set, then we adopt a Forward Maximum Matching algorithm to determine the question category for every test sample.",
"C : Concatenate question and answer directly.",
"For two cases above, the resulting dense captions are 8 How many flowers in the vase? and No Is this a crosswalk?.",
"The resulting dense captions after concatenation are actually rhetorical questions.",
"We deliberately add answer text to the front of question text in order to avoid the answer being deleted when trimming dense captions to the same length.",
"R C : We first use strategy R at training, which is aimed at preventing the model from excessively focusing on the co-occurrence relation between question category and answer, and then use strategy C at testing to introduce more information for inference.",
"Adopting any strategy above, we combine Q i and each answer in A i to derive the dense captions C i .",
"And thus we have a dataset D (cid:48)(cid:48) = { Ii, C ni } M ,N i =1 ,n =1 with M N instances for VE task.",
"VE Scorer We use the pre-trained model LXMERT to score the visual entailment degree of ( I i , C ni ).",
"LXMERT separately encodes image and caption text in two streams.",
"Next, the separate streams interact through co-attentional transformer layers.",
"In the textual stream, the dense caption is encoded into a high-level concept.",
"Then the visual representations from visual stream can verify the visual presence/absence of the high-level concept.",
"We represent the VE score for the i th image and its n th candidate caption as: sigmoid ( T rm ( I i , C ni )) , where T rm () is the 1-demensional output from the dense layers following LXMERT, () denotes the sigmoid function.",
"The larger score represents higher entailment degree.",
"We optimize parameters by minimizing the multi-label soft loss: LV E = 1 M NM (cid:88) i =1 N (cid:88) n =1 [ t ni log ( ( T rm ( I i , C ni ))) + (1 t ni ) log (1 ( T rm ( I i , C ni )))] (2) where t ni is the soft target score of the n th answer.",
"Combination with Language-Priors Method After Candidate Answer Selecting, the amount of candidate answers decreases from all possible answers to top N .",
"Although some unrelated answers are filtered out, the dataset D (cid:48)(cid:48) for VE system is still biased.",
"Therefore, we can optionally apply existing language-priors methods to our framework for further reducing language priors.",
"Take the SSL as an example, we apply the loss function of its self-supervised task to our framework by adjusting the loss function to: L ssl = M NM (cid:88) i =1 N (cid:88) n =1 P ( I (cid:48) i , C ni ) (3) where ( I (cid:48) i , C ni ) denotes the irrelevant image-caption pairs, is a down-weighting coefficients.",
"Question Type Discriminator Intuitively, most Yes/No questions can be answered by the answer Yes or No.",
"There is no need to provide too many candidate answers for Yes/No questions at the test stage.",
"Therefore, we propose a Question Type Discriminator(QTD) to determine the question type and then correspondingly set different numbers of candidate answers, denoted as N (cid:48) .",
"Specifically, we roughly divided question types (in-cluding Yes/No, Num and Other) into yes/no and non-yes/no.",
"A GRU binary classifier is trained with cross-entropy loss and evaluated with 5-fold cross-validation on the train split of each dataset.",
"Then, the trained QTD model with an accuracy about 97% is implemented as an off-line module during the test stage.",
"We will further investigate the effect of N (cid:48) on each question type in the next section.",
"The answer A i corresponding to C i is the final prediction.",
"Datasets Our models are trained and evaluated on the VQA-CP v2(Agrawal et al., 2018) dataset, which is well-crafted by re-organizing VQA v2(Goyal et al., 2017) training and validation sets such that answers for each question category (65 categories according to the question prefix) have different distributions in the train and test sets.",
"Therefore, VQA-CP v2 is a natural choice for evaluating VQA model's generalizability.",
"The questions of VQA-CP v2 include 3 types: Yes/No, Num and Other.",
"Note that the question type and question category (e.g.what color) are different.",
"Besides, we also evaluate our models on the VQA v2 validation set for completeness, and compare the accuracy difference between two datasets with the standard VQA evaluation metric(Antol et al., 2015).",
"Baselines We compare our method with the following baseline methods: UpDn(Anderson et al., 2018), AReg(Ramakrishnan et al., 2018), RUBi(Cadene et al., 2019), LMH(Clark et al., 2019), RankVQA(Qiao et al., 2020), SSL(Zhu et al., 2020), CSS(Chen et al., 2020a), CL(Liang et al., 2020) and LXMERT(Tan and Bansal, 2019).",
"Most of them are designed for the language priors problem, while LXMERT represents the recent trend towards utilizing BERT-like pre-trained mod-els(Li et al., 2019; Chen et al., 2020b; Li et al., 2020) which have top performances on various downstream vision and language tasks (including VQA-v2).",
"Note that MUTANT(Gokhale et al., 2020) uses the extra object-name label to ground the textual concepts in the image.",
"For fair comparison, we do not compare with MUTANT.",
"In this paper, we mainly choose SSL as our CAS and set N =12 and N =20 for training.",
"To extract image features, we follow previous work and use the pre-trained Faster R-CNN to encode each image as a set of fixed 36 objects with 2048-dimensional feature vectors.",
"We use the tokenizer of LXMERT to segment each dense caption into words.",
"All the questions are trimmed to the same length of 15 or 18, respectively for R or C question-answer combination strategy.",
"In the Answer Re-ranking Module, we respectively incorporate two language-priors methods, SSL and LMH, into our proposed framework SAR, which is dubbed as SAR+SSL and SAR+LMH.",
"Our models are trained on two TITAN RTX 24GB GPUs.",
"We train SAR+SSL for 20 epochs with batch size of 32, SAR and SAR+LMH for 10 epochs with batch size of 64.",
"For SAR+SSL, we follow the same setting as the original paper(Zhu et al., 2020), except that we don't need to pre-train the model with the VQA loss before fine-tuning it with the self-supervised loss.",
"The Adam optimizer is adopted with the learning rate 1e5.",
"For Question Type Discriminator, we use 300-dimensional Glove(Pennington et al., 2014) vectors to initialize word embeddings and feed them into a unidirectional GRU with 128 hidden units.",
"When testing on the VAQ-CP v2, N (cid:48) ranges from 1-2 for yes/no questions and 5-15 for non-yes/no questions.",
"As for VQA v2, N (cid:48) ranges from 1-2 for yes/no Model VQA-CP v2 test(%) VQA-v2 val(%) GAP ALL Yes / No Num Other All Yes / No Num Other (%) UpDN(Anderson et al., 2018) 39.74 42.27 11.93 46.05 63.48 81.18 42.14 55.66 23.74 Areg(Ramakrishnan et al., 2018) 41.17 65.49 15.48 35.48 62.75 79.84 42.35 55.16 21.58 RUBI(Cadene et al., 2019) 47.11 68.65 20.28 43.18 61.16 --14.05 LMH(Clark et al., 2019) 52.45 69.81 44.46 45.54 61.64 77.85 40.03 55.04 9.19 RankVQA(Qiao et al., 2020) 43.05 42.53 13.91 51.32 65.42 82.51 57.75 45.35 22.37 LXMERT(Tan and Bansal, 2019) 46.23 42.84 18.91 55.51 74.16 89.31 56.85 65.14 27.93 SSL(Zhu et al., 2020) 57.59 86.53 29.87 50.03 63.73 --6.14 CSS(Chen et al., 2020a) 58.95 84.37 49.42 48.21 59.91 73.25 39.77 55.11 0.96 CL(Liang et al., 2020) 59.18 86.99 49.89 47.16 ---Top12-SAR(R C) ( Ours ) 64.55 83.03 50.05 58.8 70.41 87.87 54.34 61.38 5.86 Top20-SAR(R C) ( Ours ) 65.44 83.13 54.52 59.16 70.63 87.91 54.93 61.64 5.19 Top12-SAR+SSL(R C) ( Ours ) 64.29 82.86 51.98 57.94 69.84 87.22 54.41 60.70 5.55 Top20-SAR+SSL(R C) ( Ours ) 65.32 83.41 54.32 58.85 70.03 87.47 54.59 60.85 4.71 Top12-SAR+LMH(R) ( Ours ) 65.93 85.38 62.30 56.73 69.13 87.61 50.43 60.03 3.20 Top20-SAR+LMH(R) ( Ours ) 66.73 86.00 62.34 57.84 69.22 87.46 51.20 60.12 2.49 Table 1: Results on VQA-CP v2 test and VQA-v2 validation set.",
"questions and 2-5 for non-yes/no questions.",
"Performance on two benchmarks VQA-CP-v2 and VQA-v2 is shown in Table 1. We report the best results of SAR, SAR+SSL and SAR+LMH among 3 question-answer combination strategies respectively.",
"TopNrepresents that N candidate answers (selected by CAS) feed into the Answer Reranking Module for training.",
"Our approach is evaluated with two settings of N (12 and 20).",
"From the results on VQA-CP v2 shown in Table 1, we can observe that: (1) Top20-SAR+LMH establishes a new state-of-the-art accuracy of 66 .",
"73% on VQA-CP v2, beating the previous best-performing method CL by 7 .",
"55% .",
"Even without combining language-priors methods in Answer Re-ranking module, our model Top20-SAR outperforms CL by 6 .",
"26% .",
"These show the outstanding effectiveness of our proposed SAR framework.",
"(2) SAR+SSL and SAR+LMH achieve much better performance than SSL and LMH, which demonstrates that SAR is compatible with current language-priors methods and could realize their full potential.",
"(3) Compared with another reranking-based model RankVQA, our method elevates the performance by a large margin of 23 .",
"68% .",
"This shows the superiority of our proposed progressive select-and-rerank framework over RankVQA which only uses the answer reranking as an auxiliary task.",
"(4) Previous models did not generalize well on all question types.",
"CL is the previous best on the Yes/No, Num questions and LXMERT on the Other questions.",
"In comparison, our model not only rivals the previous best model on the Yes/No questions but also improves the best performance on the Num and Other questions by 12 .",
"45% and 3 .",
"65% .",
"The remarkable performance on all question types demonstrates that our model makes a significant progress toward a truly comprehensive VQA model.",
"We also evaluate our method on the VQA v2 which is deemed to have strong language biases.",
"As shown in Table 1, our method achieves the best accuracy of 70 .",
"63% amongst baselines specially designed for overcoming language priors, and is the closest to the SOTA established by LXMERT which is trained explicitly for the biased data setting.",
"For completeness, the performance gap between two datasets is also compared in Table 1 with the protocol from Chen et al. (2020a).",
"Compared with most previous models which suffer severe performance drops between VQA v2 and VQA-CP v2 (e.g., 27 . 93% in LXMERT), the Top20-SAR+LMH significantly decreases the performance drop to 2 .",
"49% , which demonstrates the effectiveness of our framework to further overcome the language biases.",
"Though CSS achieves a better performance gap, it sacrifices the performance on the VQA v2.",
"Meanwhile, as N rises from 12 to 20, our models achieve better accuracy on both datasets along with a smaller performance gap.",
"This demonstrates that, unlike previous methods, our method can alleviate language priors while maintaining an excellent capability of answering questions.",
"Nonetheless, we Figure 3: Results from model SAR+SSL(R C) in VQA-CP v2 with different N during training.",
"believe that, how to improve the model's generality and further transform the trade-off between eliminating language priors and answering questions into winwin outcomes, is a promising research direction in the future.",
"From Figure 3, we can observe that the overall performance is getting better as N increases.",
"The performance improvement on the Num and Other questions is especially obvious, and there is a very slight drop on the Yes/No questions.",
"We believe that SAR can further get better performance by properly increasing N .",
"Due to the resource limitation, the largest N we use is 20 in this paper.",
"To find out the potential performance limitation of CAS models, we show the accuracy of 3 CAS models on the VQA-CP v2 test set.",
"As shown in Figure 1",
"(a), the Top3 accuracy (acc) of 3 models is about 70% and Top6 acc is 80% , which guarantees that sufficient correct answers are recalled by CAS.",
"And thus, the performance limitation of CAS is negligible.",
"We also conduct experiments to investigate the effect of different CAS on SAR.",
"From the results shown in Table 2, we can observe that: (1) Choosing a better VQA model as CAS does not guarantee a better performance, e.g. performance based on Top N Model R C R C Top12 SAR 59.51 60.24 64.55 SAR+SSL 62.12 62.87 64.29 SAR+LMH 65.93 65.23 65.14 Top20 SAR 60.43 61.86 65.44 SAR+SSL 62.29 63.94 65.32 SAR+LMH 66.73 65.19 66.71 Table 3: Results on the VQA-CP v2 test set based on different question-answer combination strategies: R, C and R C. The major difference between R and C is whether keeping question prefix which includes 65 categories.",
"UpDn outperforms that based on LMH, but LMH is a better VQA model in overcoming language priors compared with UpDn.",
"This is because a good Candidate Answer Selector has two requirements:",
"(a) It should be able to recall more correct answers.",
"(b) Under the scenario of language biases, wrong answers recalled by CAS at training time should have superficial correlations with the question as strong as possible.",
"However, the ensemble methods, such as LMH, are trained to pay more attention to the samples which are not correctly answered by the question-only model.",
"This seriously reduces the recall rate of those language-priors wrong answers, which leads to the training data for VE is too simple and thus hurts the model's capability of reducing language priors.",
"(2) If CAS is the general VQA model UpDn rather than LMH and SSL, the improvement brought from the combination with language-priors method in Answer Re-ranking module is more obvious.",
"(3) Even we choose the UpDn, a backbone model of most current works, as our CAS and do not involve any language-priors methods , SAR still achieves a much better accuracy than the previous SOTA model CL by 2 .",
"53% , which shows that our basic framework already possesses outstanding capability of reducing language priors.",
"From the results shown in Table 3, we can observe that: (1) From overall results, R C achieves or rivals the best performance on three models.",
"On average, R C outperforms C by 2 .",
"02% which demonstrates avoiding the co-occurrence of question category and answer during training time could effectively alleviate language priors; R C outperforms R by 2 .",
"41% which indicates that the informa-Model All Yes / No Num Other LXM 46.23 42.84 18.91 55.51 LXM+SSL 53.09 55.07 29.60 58.50 CAS+LXM(R) 55.58 70.91 29.14 54.81 CAS+LXM+SSL(R) 59.41 76.60 40.81 55.51 CAS+LXM+QTD(R) 59.51 83.20 29.17 55.42 CAS+LXM+SSL+QTD(R) 62.12 85.14 41.63 55.68 Table 4: Ablation study to investigate the effect of each component of Top12-SAR+SSL: Candidate Answer Selector (CAS), LXMERT (LXM), Question Type Discriminator (QTD) and SSL.",
"tion of question category is useful in inference.",
"(2) On the SAR and SAR+SSL, C consistently outperforms R, but on the SAR+LMH, we see opposite results.",
"This is probably because our method and the balancing-data method SSL could learn the positive bias resulted from the superficial correlations between question category and answer, which is useful for generalization, but the ensemble-based method LMH will attenuate positive bias during de-biasing process.",
"(3) Even without language priors method, SAR with R C rivals or outperforms the SAR+SSL and SAR+LMH with R or C, which shows that R C strategy could help the model to alleviate language priors.",
"As a result, compared with R or C, our framework with R C only gains a slight performance improvement after using the same language-priors methods.",
"CAS+ represents we use the select-and-rerank framework.",
"From Table 4, we can find that: (1) LXM+SSL represents directly applying SSL to LXMERT.",
"Its poor performance shows that the major contribution of our framework does not come from the combination of the language-priors method SSL and pre-trained model LXMERT.",
"(2) Compared with LXM and LXM+SSL, CAS+LXM and CAS+LXM+SSL respectively gain prominent performance boost of 9 .",
"35% and 6 .",
"32% , which demonstrates the importance and effectiveness of our proposed select-and-rerank procedure.",
"(3) CAS+LXM+QTD(R) and CAS+LXM+SSL+QTD(R) respectively outperform CAS+LXM(R) and CAS+LXM+SSL(R) by 3 .",
"93% and 2 .",
"71% , which shows the contribution of QTD module.",
"This further demonstrates that choosing appropriate N (cid:48) for different question types is a useful step for model performance.",
"(4) CAS+LXM+SSL+QTD improves the performance of CAS+LXM+QTD by 2 .",
"61% , which shows that Figure 4: Results from SAR(R), SAR+SSL(R), SAR(R C) and SAR+LMH(R) with different N (cid:48) during test.",
"From Figure 4, we can find that: (1) The best N (cid:48) for yes/no questions is smaller than that for non-yes/no questions due to the nature of yes/no question.",
"(2) As N (cid:48) increases, the accuracy of Num and Other questions rises first and then decreases.",
"There is a trade-off behind this phenomenon: when N (cid:48) is too small, the correct answer may not be recalled by CAS; when N (cid:48) is too large, the distraction from wrong answers makes it more difficult for model to choose the correct answer.",
"We qualitatively evaluate the effectiveness of our framework.",
"As shown in Figure 5, compared with SSL, SAR performs better not only in question answering but also in visual grounding.",
"With the help of answer semantics, SAR can focus on the region relevant to the candidate answer and further use the region to verify its correctness.",
"In this paper, we propose a select-and-rerank (SAR) progressive framework based on Visual Entailment.",
"Specifically, we first select candidate answers to shrink the prediction space, then we rerank candidate answers by a visual entailment task which verifies whether the image semantically entails the synthetic statement of the question and each candidate answer.",
"Our framework can make full use of the interactive information of image, question and candidate answers.",
"In addition, it is a generic framework, which can be easily combined with the existing VQA models and further boost their abilities.",
"We demonstrate advantages of our framework on the VQA-CP v2 dataset with extensive experiments and analyses.",
"Our method establishes a new state-of-the-art accuracy of 66 .",
"73% with an improvement of 7 .",
"55% on the previous best.",
"This work was supported by National Natural Science Foundation of China (No. 61976207, No. 61906187)"
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other"
] |
[
"The Universal Trigger ( UniTrigger ) is a recently-proposed powerful adversarial textual attack method.",
"Utilizing a learning-based mechanism, UniTrigger generates a fixed phrase that, when added to any benign inputs, can drop the prediction accuracy of a textual neural network (NN) model to near zero on a target class.",
"To defend against this attack that can cause significant harm, in this paper, we borrow the honeypot concept from the cybersecurity community and propose DARCY , a honeypot-based defense framework against UniTrigger.",
"DARCY greedily searches and injects multiple trapdoors into an NN model to bait and catch potential attacks.",
"Through comprehensive experiments across four public datasets, we show that DARCY detects UniTrigger's adversarial attacks with up to 99% TPR and less than 2% FPR in most cases, while maintaining the prediction accuracy (in F1) for clean inputs within a 1% margin.",
"We also demonstrate that DARCY with multiple trapdoors is also robust to a diverse set of attack scenarios with attackers' varying levels of knowledge and skills.",
"We release the source code of DARCY at: https://github.com/lethaiq/ ACL2021-DARCY-HoneypotDefenseNLP .",
"Adversarial examples in NLP refer to carefully crafted texts that can fool predictive machine learning (ML) models.",
"Thus, malicious actors, i.e., attackers, can exploit such adversarial examples to force ML models to output desired predictions.",
"There are several adversarial example generation algorithms, most of which perturb an original text at either character (e.g., (Li et al., 2018; Gao et al., 2018)), word (e.g., (Ebrahimi et al., 2018; Jin et al.; Wallace et al., 2019; Gao et al., 2018; Garg and Ramakrishnan, 2020), or sentence level (e.g., (Le et al., 2020; Gan and Ng; Cheng et al.)).",
"Because most of the existing attack methods are instance-based search methods, i.e., searching an adversarial example for each specific input, they do not usually involve any learning mechanisms.",
"A few learning-based algorithms, such as the Universal Trigger ( UniTrigger ) (Wallace et al., 2019), MALCOM (Le et al., 2020), Seq2Sick (Cheng et al.) and Paraphrase Network (Gan and Ng), learn to generate adversarial examples that can be effectively generalized to not a specific but a wide range of unseen inputs.",
"In general, learning-based attacks are more attractive to attackers for several reasons.",
"First, they achieve high attack success rates.",
"For example, UniTrigger can drop the prediction accuracy of an NN model to near zero just by appending a learned adversarial phrase of only two tokens to any inputs (Tables 1 and 2).",
"This is achieved through an optimization process over an entire dataset, exploiting potential weak points of a model as a whole, not aiming at any specific inputs.",
"Second, their attack mechanism is highly transferable among similar models.",
"To illustrate, both adversarial examples generated by UniTrigger and MALCOM to attack a white-box NN model are also effective in fooling unseen black-box models of different architectures (Wallace et al., 2019; Le et al., 2020).",
"Third, thanks to their generalization to unseen inputs, learning-based adversarial generation algorithms can facilitate mass attacks with significantly reduced computational cost compared to instance-based methods.",
"Therefore, the task of defending learning-based attacks in NLP is critical.",
"Thus, in this paper, we propose a novel approach, named as DARCY , to defend adversarial examples created by UniTrigger, a strong representative learning-based attack (see Sec. 2.2).",
"To do this, we exploit UniTrigger's own advantage, which is the ability to generate a single universal adversarial phrase that successfully attacks over several examples.",
"Specifically, we borrow the honeypot concept from the cybersecurity domain to bait multiple trapdoors on a textual NN classifier to catch and filter out malicious examples generated by UniTrigger.",
"In other words, we train a target NN model such that it offers great a incentive for its attackers to generate adversarial texts whose behaviors are pre-defined and intended by defenders.",
"Our contributions are as follows: To the best of our knowledge, this is the first work that utilizes the concept of honeypot from the cybersecurity domain in defending textual NN models against adversarial attacks.",
"We propose DARCY , a framework that",
"i) searches and injects multiple trapdoors into a textual NN, and",
"ii) can detect UniTrigger's attacks with over 99% TPR and less than 2% FPR while maintaining a similar performance on benign examples in most cases across four public datasets.",
"Let F ( x , ) , parameterized by , be a target NN that is trained on a dataset D train { x , y } Ni with y i , drawn from a set C of class labels, is the ground-truth label of the text x i .",
"F ( x , ) outputs a vector of size |C| with F ( x ) L predicting the probability of x belonging to class L .",
"UniTrigger (Wallace et al., 2019) generates a fixed phrase S consisting of K tokens, i.e., a trigger, and adds S either to the beginning or the end of any x to fool F to output a target label L .",
"To search for S , UniTrigger optimizes the following objective function on an attack dataset D attack : min SLL = (cid:88) i,y i (cid:54) = L log ( f ( S x i , ) L ) (1) where is a token-wise concatenation.",
"To optimize Eq.",
"(1), the attacker first initializes the trigger to be a neutral phrase (e.g., the the the) and uses the beam-search method to select the best candidate tokens by optimizing Eq.",
"(1) on a mini-batch randomly sampled from D attack .",
"The top tokens are then initialized to find the next best ones until Attack MR SST Neg Pos Neg Pos HotFlip 91.9 48.8 90.1 60.3 TextFooler 70.4 25.9 65.5 34.3 TextBugger 91.9 46.7 87.9 63.8 UniTrigger 1.7 0.4 2.8 0.2 UniTrigger* 29.2 28.3 30.0 28.1 (*) Performance after being filtered by USE Table 2: Prediction Accuracy of CNN under attacks targeting a Negative (Neg) or Positive (Pos) Class LL converges.",
"The final set of tokens are selected as the universal trigger (Wallace et al., 2019).",
"Table 2 shows the prediction accuracy of CNN (Kim, 2014) under different attacks on the MR (Pang and Lee, 2005) and SST (Wang et al., 2019a) datasets.",
"Both datasets are class-balanced.",
"We limit # of perturbed tokens per sentence to two.",
"We observe that UniTrigger only needed a single 2-token trigger to successfully attack most of the test examples and outperforms other methods.",
"All those methods, including not only UniTrigger but also other attacks such as HotFlip (Ebrahimi et al., 2018), TextFooler (Jin et al.) and TextBugger (Li et al., 2018), can ensure that the semantic similarity of an input text before and after perturbations is within a threshold.",
"Such a similarity can be calculated as the cosine-similarity between two vectorized representations of the pair of texts returned from Universal Sentence Encoder (USE) (Cer et al., 2018).",
"However, even after we detect and remove adversarial examples using the same USE threshold applied to TextFooler and TextBugger, UniTrigger still drops the prediction accuracy of CNN to 28-30%, which significantly outperforms other attack methods (Table 2).",
"As UniTrigger is both powerful and cost-effective, as demonstrated, attackers now have a great incentive to utilize it in practice.",
"Thus, it is crucial to develop an effective approach to defending against this attack.",
"To attack F , UniTrigger relies on Eq.",
"(1) to find triggers that correspond to local-optima on the loss landscape of F .",
"To safeguard F , we bait multiple optima on the loss landscape of F , i.e., honeypots, such that Eq.",
"(1) can conveniently converge to one of them.",
"Specifically, we inject different trapdoors (i.e., a set of pre-defined to-Figure 1: An example of DARCY .",
"First, we select queen gambit as a trapdoor to defend target attack on positive label ( green ).",
"Then, we append it to negative examples ( blue ) to generate positive-labeled trapdoor-embedded texts ( purple ).",
"Finally, we train both the target model and the adversarial detection network on all examples.",
"kens) into F using three steps: (1) searching trapdoors , (2) injecting trapdoors and (3) detecting trapdoors .",
"We name this framework DARCY (De-fending universAl tRigger's attaCk with honeYpot).",
"Fig. 1 illustrates an example of DARCY .",
"STEP 1: Searching Trapdoors.",
"To defend attacks on a target label L , we select K trapdoors S L = { w 1 , w 2 , ..., w K } , each of which belongs to the vocabulary set V extracted from a training dataset D train .",
"Let H ( ) be a trapdoor selection function: S L H ( K, D train , L ) .",
"Fig. 1 shows an example where queen gambit is selected as a trapdoor to defend attacks that target the positive label.",
"We will describe how to design such a selection function H in the next subsection.",
"trap L where D y (cid:54) = L {D train : y (cid:54) = L } .",
"Then, we can bait S L into F by training F together with all the injected examples of all target labels L C by minimizing the objective function: min LF = LD train F + LD trap F , (3) where D trap {D L trap | L C} , LDF is the Negative Log-Likelihood (NLL) loss of F on the dataset D .",
"A trapdoor weight hyper-parameter controls the contribution of trapdoor-embedded examples during training.",
"By optimizing Eq.",
"(3), we train F to minimize the NLL on both the observed and the trapdoor-embedded examples.",
"This generates traps or convenient convergence points (e.g., local optima) when attackers search for a set of triggers using Eq.",
"(1).",
"Moreover, we can also control the strength of the trapdoor.",
"By synthesizing DL trap with all examples from D y (cid:54) = L (Eq.",
"(2)), we want to inject strong trapdoors into the model.",
"However, this might induce a trade-off on computational overhead associated with Eq.",
"(3).",
"Thus, we sample DL trap based a trapdoor ratio hyper-parameter (cid:15) |D L trap | / |D y (cid:54) = L | to help control this trade-off.",
"STEP 3: Detecting Trapdoors.",
"Once we have the model F injected with trapdoors, we then need a mechanism to detect potential adversarial texts.",
"To do this, we train a binary classifier G ( ) , parameterized by G , to predict the probability that x includes a universal trigger using the output from F 's last layer (denoted as F ( x ) ) following G ( x , G ) : F ( x ) (cid:55) [0 , 1] .",
"G is more preferable than a trivial string comparison because Eq.",
"(1) can converge to not exactly but only a neighbor of S L .",
"We train G ( ) using the binary NLL loss: min GLG = (cid:88) x D train x (cid:48) D trap log ( G ( x )) log (1 G ( x (cid:48) )) .",
"Searching trapdoors is the most important step in our DARCY framework.",
"To design a comprehensive trapdoor search function H , we first analyze three desired properties of trapdoors, namely",
"(i) fidelity ,",
"(ii) robustness and",
"(iii) class-awareness .",
"Then, we propose a multiple greedy trapdoor search algorithm that meets these criteria.",
"Fidelity.",
"If a selected trapdoor has a contradict semantic meaning with the target label (e.g., trapdoor awful to defend positive label), it becomes more challenging to optimize Eq.",
"(3).",
"Hence, H should select each token w S L to defend a target label L such that it locates as far as possible to other contrasting classes from L according to F 's decision boundary when appended to examples of D y (cid:54) = L in Eq.",
"(2).",
"Specifically, we want to optimize the fidelity loss as follows.",
"where d ( ) is a similarity function (e.g., cosine similarity ), CF L (cid:48) 1 | DL (cid:48) | (cid:80) x DL (cid:48) F ( x ) is the centroid of all outputs on the last layer of F when predicting examples of a contrastive class L (cid:48) .",
"Robustness to Varying Attacks.",
"Even though a single strong trapdoor, i.e., one that can significantly reduce the loss of F , can work well in the original UniTrigger's setting, an advanced attacker may detect the installed trapdoor and adapt a better attack approach.",
"Hence, we suggest to search and embed multiple trapdoors ( K 1 ) to F for defending each target label.",
"Class-Awareness.",
"Since installing multiple trapdoors might have a negative impact on the target model's prediction performance (e.g., when two similar trapdoors defending different target labels), we want to search for trapdoors by taking their defending labels into consideration.",
"Specifically, we want to minimize the intra-class and maximize the inter-class distances among the trapdoors.",
"Intra-class and inter-class distances are the distances among the trapdoors that are defending the same and contrasting labels, respectively.",
"To do this, we want to put an upper-bound on the intra-class distances and a lower-bound on the inter-class distances as follows.",
"Let e w denote the embedding Figure 2: Multiple Greedy Trapdoor Search of token w , then we have: Objective Function and Optimization.",
"Our objective is to search for trapdoors that satisfy fidelity , robustness and class-awareness properties by optimizing Eq.",
"(5) subject to Eq.",
"(6) and K 1 .",
"We refer to Eq.",
"(7) in the Appendix for the full objective function.",
"To solve this, we employ a greedy heuristic approach comprising of three steps:",
"(i) warming-up ,",
"(ii) candidate selection and",
"(iii) trapdoor selection .",
"Alg.",
"1 and Fig. 2 describe the algorithm in detail.",
"The first step (Ln.4) warms up F to be later queried by the third step by training it with only an epoch on the training set D train .",
"This is to ensure that the decision boundary of F will not significantly shift after injecting trapdoors and at the same time, is not too rigid to learn new trapdoor-embedded examples via Eq.",
"(3).",
"While the second step (Ln.1012, Fig. 2B) searches for candidate trapdoors to defend each label L C that satisfy the class-awareness property, the third one (Ln.14 20, Fig. 2C) selects the best trapdoor token for each defending L from the found candidates to maximize F 's fidelity .",
"To consider the robustness aspect, the previous two steps then repeat K 1 times (Ln.823).",
"To reduce the computational cost, we randomly sample a small portion ( T (cid:28) |V| tokens) of candidate trapdoors, found in the first step (Ln.12), as inputs to the second step.",
"Computational Complexity.",
"The complexity of Alg.",
"(1) is dominated by the iterative process of Ln.823, which is O ( K |C||V| log |V| ) ( T (cid:28) |V| ).",
"Given a fixed dataset, i.e., |C| , |V| are constant, our proposed trapdoor searching algorithm only scales linearly with K. This shows that there is a trade-Attack Scenario F Trapdoor G Modify Access?",
"off between the complexity and robustness of our defense method.",
"Datasets.",
"Table A.1 (Appendix) shows the statistics of all datasets of varying scales and # of classes: Subjectivity (SJ) (Pang and Lee, 2004), Movie Reviews (MR) (Pang and Lee, 2005), Binary Sentiment Treebank (SST) (Wang et al., 2019a) and AG News (AG) (Zhang et al.).",
"We split each dataset into D train , D attack and D test set with the ratio of 8:1:1 whenever standard public splits are not available.",
"All datasets are relatively balanced across classes.",
"Attack Scenarios and Settings.",
"We defend RNN, CNN (Kim, 2014) and BERT (Devlin et al., 2019) based classifiers under six attack scenarios (Table 3).",
"Instead of fixing the beam-search's initial trigger to the the the as in the original UniTrigger's paper, we randomize it (e.g., gem queen shoe) for each run.",
"We report the average results on D test over at least 3 iterations.",
"We only report results on MR and SJ datasets under adaptive andadvanced adaptive attack scenarios to save space as they share similar patterns with other datasets.",
"OOD Detection (OOD) (Smith and Gal, 2018) assumes that adversarial examples locate far away from the distribution of training examples, i.e., out-of-distribution (OOD) .",
"It then considers examples whose predictions have high uncertainty, i.e., high entropy, as adversarial examples.",
"Self Attack (SelfATK) uses UniTrigger to attack itself for several times and trains a network to Figure 3: DARCY and SelfATK under novice attack detect the generated triggers as adversarial texts.",
"Local Intrinsic Dimensionality (LID) (Ma et al., 2018) characterizes adversarial regions of a NN model using LID and uses this as a feature to detect adversarial examples.",
"Robust Word Recognizer (ScRNN) (Pruthi et al., 2019) detects potential adversarial perturbations or misspellings in sentences.",
"Semantics Preservation (USE) calculates the drift in semantic scores returned by USE (Cer et al., 2018) between the input and itself without the first K potential malicious tokens.",
"DARCY : We use two variants, namely DARCY (1) and DARCY (5) which search for a single trapdoor ( K 1 ) and multiple trapdoors ( K 5 ) to defend each label, respectively.",
"Evaluation Metrics.",
"We consider the following metrics.",
"(1) Fidelity (Model F1) : We report the F1 score of F 's prediction performance on clean unseen examples after being trained with trapdoors; (2) Detection Performance (Detection AUC) : We report the AUC (Area Under the Curve) score on how well a method can distinguish between benign and adversarial examples; (3) True Positive Rate (TPR) and False Positive Rate (FPR) : While TPR is the rate that an algorithm correctly identifies adversarial examples, FPT is the rate that such algorithm incorrectly detects benign inputs as adversarial examples.",
"We desire a high Model F1, Detection AUC, TPR, and a low FPR.",
"Evaluation on Novice Attack.",
"A novice attacker does not know the existence of trapdoors.",
"Overall, table A.2 (Appendix) shows the full results.",
"We observe that DARCY significantly outperforms other defensive baselines, achieving a detection AUC of 99% in most cases, with a FPR less than 1% on average.",
"Also, DARCY observes a 0.34% improvement in average fidelity (model F1) thanks to the regularization effects from additional training data D trap .",
"Among the baselines, SelfATK achieves a similar performance with DARCY in all except the Method RNN BERT Clean Detection Clean Detection F1 AUC FPR TPR F1 AUC FPR TPROOD 75.2 52.5 45.9 55.7 84.7 35.6 63.9 48.2 ScRNN -51.9 43.0 47.0 -51.8 52.3 54.9 M USE -62.9 48.1 75.9 -53.1 55.1 64.1 R SelfATK -92.3 0.6 85.1 -97.5 4.1 95.2 LID -51.3 45.8 48.4 -54.2 51.5 59.6 DARCY (1) 77.8 74.8 0.8 50.4 84.7 74.3 3.9 50.7 DARCY (5) 78.1 92.3 2.9 87.6 84.3 92.3 4.0 85.3 OOD 89.4 34.5 62.5 43.1 96.1 21.9 74.6 43.6 ScRNN -57.6 51.1 65.7 -53.1 53.6 58.1 S USE -70.7 41.4 81.6 -65.7 48.5 74.4 J SelfATK -80.7 8.0 69.3 -96.8 6.2 94.0 LID -50.7 54.3 55.7 -62.2 56.1 79.0 DARCY (1) 89.4 71.7 0.6 43.9 96.2 68.6 6.1 41.0 DARCY (5) 88.9 92.7 2.4 87.9 96.1 100.0 6.2 100.0 OOD 79.0 50.6 48.8 52.5 93.6 31.3 67.1 45.7 ScRNN -53.8 19.2 26.8 -53.2 50.3 54.9 S USE -60.8 50.1 72.2 -51.0 57.7 63.7 S SelfATK -66.1 3.7 35.9 -91.1 1.7 82.5 T LID -49.9 62.2 61.9 -46.2 42.6 35.1 DARCY (1) 82.9 69.7 0.2 39.6 94.2 50.0 1.6 1.6 DARCY (5) 83.3 93.1 3.2 89.4 94.1 94.6 1.6 89.4 OOD 90.9 40.5 56.3 46.9 93.1 26.9 69.2 40.7 ScRNN -56.0 46.1 54.7 -54.4 46.4 52.6 A USE -88.6 22.7 90.5 -60.0 50.3 70.8 G SelfATK -88.4 6.2 83.1 -92.0 0.1 84.0 LID -54.3 45.9 54.6 -48.3 52.9 49.4 DARCY (1) 87.4 54.0 80.4 88.4 93.9 70.3 0.1 40.7 DARCY (5) 89.7 95.2 9.3 99.8 93.3 97.0 0.1 94.0 Table 4: Average adversarial detection performance across all target labels under advanced attack SST dataset with a detection AUC of around 75% on average (Fig. 3).",
"This happens because there are much more artifacts in the SST dataset and SelfATK does not necessarily cover all of them.",
"We also experiment with selecting trapdoors randomly .",
"Fig. 4 shows that greedy search produces stable results regardless of training F with a high ( (cid:15) 1 .",
"0 , strong trapdoors) or a low ( (cid:15) 0 .",
"1 , weak trapdoors) trapdoor ratio (cid:15) .",
"Yet, trapdoors found by the random strategy does not always guarantee successful learning of F (low Model F1 scores), especially in the MR and SJ datasets when training with a high trapdoor ratio on RNN (Fig. 4 1 ).",
"Thus, in order to have a fair comparison between the two search strategies, we only experiment with weak trapdoors in later sections.",
"Evaluation on Advanced Attack.",
"Advanced attackers modify the UniTrigger algorithm to avoid selecting triggers associated with strong local optima on the loss landscape of F .",
"So, instead of 1 AG dataset is omitted due to computational limit Figure 4: Greedy v.s. random single trapdoor with strong and weak trapdoor injection on RNN Figure 5: Performance under adaptive attacks Figure 6: Detection AUC v.s. # query attacks always selecting the best tokens from each iteration of the beam-search method (Sec. 2.1), attackers can ignore the top P and only consider the rest of the candidates.",
"Table 4 (Table A.3, Appendix for full results) shows the benefits of multiple trapdoors.",
"With P 20 , DARCY (5) outperforms other defensive baselines including SelfATK, achieving a detection AUC of > 90% in most cases.",
"Evaluation on Adaptive Attack.",
"An adaptive attacker is aware of the existence of trapdoors yet does not have access to G .",
"Thus, to attack F , the attacker adaptively replicates G with a surrogate network G (cid:48) , then generates triggers that are undetectable by G (cid:48) .",
"To train G (cid:48) , the attacker can execute a # of queries ( Q ) to generate several triggers through F , and considers them as potential trapdoors.",
"Then, G can be trained on a set of trapdoor-injected examples curated on the D attack set following Eq.",
"(2) and (4).",
"Fig. 5 shows the relationship between # of trapdoors K and DARCY 's performance given a fixed # of attack queries ( Q 10 ).",
"An adaptive attacker can drop the average TPR to nearly zero when Figure 7: Detection TPR v.s. # ignored tokens Figure 8: Detection TPR v.s. # ignored tokens F is injected with only one trapdoor for each label ( K 1 ).",
"However, when K 5 , TPR quickly improves to about 90% in most cases and fully reaches above 98% when K 10 .",
"This confirms the robustness of DARCY as described in Sec. 3.2.",
"Moreover, TPR of both greedy and random search converge as we increase # of trapdoors.",
"However, Fig. 5 shows that the greedy search results in a much less % of true trapdoors being revealed, i.e., revealed ratio , by the attack on CNN.",
"Moreover, as Q increases, we expect that the attacker will gain more information on F , thus further drop DARCY 's detection AUC.",
"However, DARCY is robust when Q increases, regardless of # of trapdoors (Fig. 6).",
"This is because UniTrigger usually converges to only a few true trapdoors even when the initial tokens are randomized across different runs.",
"We refer to Fig. A.2, A.3, Appendix for more results.",
"Evaluation on Advanced Adaptive Attack.",
"An advanced adaptive attacker not only replicates G by G (cid:48) , but also ignores top P tokens during a beam-search as in the advanced attack (Sec. 4.2) to both maximize the loss of F and minimize the detection chance of G (cid:48) .",
"Overall, with K 5 , an advanced adaptive attacker can drop TPR by as much as 20% when we increase P :1 10 (Fig. 7).",
"However, with K 15 , DARCY becomes fully robust against the attack.",
"Overall, Fig. 7 also illustrates that DARCY with a greedy trapdoor search is much more robust than the random strategy especially when K 3 .",
"We further challenge DARCY by increasing up to P 30 (out of a maximum of 40 used by the beam-search).",
"embedded into F , the more robust the DARCY will become.",
"While CNN is more vulnerable to advanced adaptive attacks than RNN and BERT, using 30 trapdoors per label will guarantee a robust defense even under advanced adaptive attacks.",
"Evaluation on Oracle Attack.",
"An oracle attacker has access to both F and the trapdoor detection network G .",
"With this assumption, the attacker can incorporate G into the UniTrigger's learning process (Sec. 2.1) to generate triggers that are undetectable by G .",
"Fig. 9 shows the detection results under the oracle attack.",
"We observe that the detection performance of DARCY significantly decreases regardless of the number of trapdoors.",
"Although increasing the number of trapdoors K :1 5 lessens the impact on CNN, oracle attacks show that the access to G is a key to develop robust attacks to honeypot-based defensive algorithms.",
"Evaluation under Black-Box Attack.",
"Even though UniTrigger is a white-box attack, it also works in a black-box setting via transferring triggers S generated on a surrogate model F (cid:48) to attack F .",
"As several methods (e.g., (Papernot et al., 2017)) have been proposed to steal, i.e., replicate F to create F (cid:48) , we are instead interested in examining if trapdoors injected in F (cid:48) can be transferable to F ?",
"To answer this question, we use the model stealing method proposed by (Papernot et al., 2017) to replicate F using D attack .",
"Table A.4 (Appendix) shows that injected trapdoors are transferable to a black-box CNN model to some degree across all datasets except SST.",
"Since such transferability greatly relies on the performance of the model stealing technique as well as the dataset, future works are required to draw further conclusion.",
"Advantages and Limitations of DARCY .",
"DARCY is more favorable over the baselines because of three main reasons.",
"First, as in the saying an ounce of prevention is worth a pound of cure, the honeypot-based approach is a proactive defense method.",
"Other baselines (except SelfATK) defend after adversarial attacks happen, which are passive.",
"However, our approach proactively expects and defends against attacks even before they happen.",
"Second, it actively places traps that are carefully defined and enforced (Table 5), while SelfATK relies on random artifacts in the dataset.",
"Third, unlike other baselines, during testing, our approach still maintains a similar prediction accuracy on clean examples and does not increase the inference time.",
"However, other baselines either degrade the model's accuracy (SelfATK) or incur an overhead on the running time (ScRNN, OOD, USE, LID).",
"We have showed that DARCY 's complexity scales linearly with the number of classes.",
"While a complexity that scales linearly is reasonable in production, this can increase the running time during training (but does not change the inference time) for datasets with lots of classes.",
"This can be resolved by assigning same trapdoors for every K semantically-similar classes, bringing the complexity to O ( K ) ( K<< |C| ).",
"Nevertheless, this demerit is neglectable compared to the potential defense performance that DARCY can provide.",
"Case Study: Fake News Detection.",
"UniTrigger can help fool fake news detectors.",
"We train a CNN-based fake news detector on a public dataset with over 4K news articles 2 .",
"The model achieves 75% accuracy on the test set.",
"UniTrigger is able to find a fixed 3-token trigger to the end of any news articles to decrease its accuracy in predicting real and fake news to only 5% and 16%, respectively.",
"In a user study on Amazon Mechanical Turk (Fig. A.1, Appendix), we instructed 78 users to spend at least 2 truthdiscoverykdd2020.github.io/ Length 50 words 100 words 250 words 500 words GF 12 13 16 17 23 23 26 26 Human 7.5 7.8 8.2 7.5 7.4 7.4 7.4 7.0 Table 6: Changes in average readability of varied-length news articles after UniTrigger attack using Gunning Fog (GF) score and human evaluation Pruning% MR SJ SST AG F1 AUC F1 AUC F1 AUC F1 AUC 20% 64.9 99.3 80.0 99.2 37.3 68.2 17.1 98.5 50% 51.3 91.9 82.6 99.4 66.6 50.3 11.9 87.3 Table 7: Model F1 / detect AUC of CNN under trapdoor removal using model-pruning 1 minute reading a news article and give a score from 1 to 10 on its readability.",
"Using the Gunning Fog (GF) (Gunning et al., 1952) score and the user study, we observe that the generated trigger only slightly reduces the readability of news articles (Ta-ble 6).",
"This shows that UniTrigger is a very strong and practical attack.",
"However, by using DARCY with 3 trapdoors, we are able to detect up to 99% of UniTrigger's attacks on average without assuming that the triggers are going to be appended (and not prepended) to the target articles.",
"Trapdoor Detection and Removal.",
"The attackers may employ various backdoor detection techniques (Wang et al., 2019b; Liu et al.; Qiao et al., 2019) to detect if F contains trapdoors.",
"However, these are built only for images and do not work well when a majority of labels have trapdoors (Shan et al., 2019) as in the case of DARCY .",
"Recently, a few works proposed to detect backdoors in texts.",
"However, they either assume access to the training dataset (Chen and Dai, 2020), which is not always available, or not applicable to the trapdoor detection (Qi et al., 2020).",
"Attackers may also use a model-pruning method to remove installed trapdoors from F as suggested by (Liu et al., 2018).",
"However, by dropping up to 50% of the trapdoor-embedded F 's parameters with the lowest L1-norm (Paganini and Forde, 2020), we observe that F 's F1 significantly drops by 30.5% on average.",
"Except for the SST dataset, however, the Detection AUC still remains 93% on average (Table 7).",
"Parameters Analysis.",
"Regarding the trapdoor-ratio (cid:15) , a large value (e.g., (cid:15) 1 .",
"0 ) can undesirably result in a detector network G that memorizes the embedded trapdoors instead of learning its semantic meanings.",
"A smaller value of (cid:15) 0 .",
"15 generally works well across all experiments.",
"Regarding the trapdoor weight , while CNN and BERT are not sensitive to it, RNN prefers 0 .",
"75 .",
"Moreover, setting , properly to make them cover 3000 neighboring tokens is desirable.",
"Adversarial Text Detection.",
"Adversarial detection on NLP is rather limited.",
"Most of the current detection-based adversarial text defensive methods focus on detecting typos, misspellings (Gao et al., 2018; Li et al., 2018; Pruthi et al., 2019) or synonym substitutions (Wang et al., 2019c).",
"Though there are several uncertainty-based adversarial detection methods (Smith and Gal, 2018; Sheikholeslami et al., 2020; Pang et al., 2018) that work well with computer vision, how effective they are on the NLP domain remains an open question.",
"Honeypot-based Adversarial Detection.",
"(Shan et al., 2019) adopts the honeypot concept to images.",
"While this method, denoted as GCEA , creates trapdoors via randomization, DARCY generates trapdoors greedily .",
"Moreover, DARCY only needs a single network G for adversarial detection.",
"In contrast, GCEA records a separate neural signature (e.g., a neural activation pattern in the last layer) for each trapdoor.",
"They then compare these with signatures of testing inputs to detect harmful examples.",
"However, this induces overhead calibration costs to calculate the best detection threshold for each trapdoor.",
"Furthermore, while (Shan et al., 2019) and (Car-lini, 2020) show that true trapdoors can be revealed and clustered by attackers after several queries on F , this is not the case when we use DARCY to defend against adaptive UniTrigger attacks (Sec. 4.2).",
"Regardless of initial tokens (e.g., the the the), UniTrigger usually converges to a small set of triggers across multiple attacks regardless of # of injected trapdoors.",
"Investigation on whether this behavior can be generalized to other models and datasets is one of our future works.",
"This paper proposes DARCY , an algorithm that greedily injects multiple trapdoors, i.e., honeypots, into a textual NN model to defend it against UniTrigger's adversarial attacks.",
"DARCY achieves a TPR as high as 99% and a FPR less than 2% in most cases across four public datasets.",
"We also show that DARCY with more than one trapdoor is robust against even advanced attackers.",
"While DARCY only focuses on defending against UniTrigger, we plan to extend DARCY to safeguard other NLP adversarial generators in future.",
"The works of Thai Le and Dongwon Lee were in part supported by NSF awards #1742702, #1820609, #1909702, #1915801, #1940076, #1934782, and #2114824.",
"The work of Noseong Park was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No.",
"2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei Uni-versity)).",
"Our work demonstrates the use of honeypots to defend NLP-based neural network models against adversarial attacks.",
"Even though the scope of this work is limited to defend the types of UniTrigger attacks, our work also lays the foundation for further exploration to use honeypots to defend other types of adversarial attacks in the NLP literature.",
"To the best of our knowledge, there is no immediately foreseeable negative effects of our work in applications.",
"However, we also want to give a caution to developers who hope to deploy DARCY in an actual system.",
"Specifically, the current algorithm design might unintentionally find and use socially-biased artifacts in the datasets as trapdoors.",
"Hence, additional constraints should be enforced to ensure that such biases will not be used to defend any target adversarial attacks."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We introduce a large-scale dataset of math word problems and an interpretable neural math problem solver that learns to map problems to operation programs.",
"Due to annotation challenges, current datasets in this domain have been either relatively small in scale or did not offer precise operational annotations over diverse problem types.",
"We introduce a new representation language to model precise operation programs corresponding to each math problem that aim to improve both the performance and the interpretability of the learned models.",
"Using this representation language, our new dataset, MathQA, significantly enhances the AQuA dataset with fully-specified operational programs.",
"We additionally introduce a neural sequence-to-program model enhanced with automatic problem categorization.",
"Our experiments show improvements over competitive baselines in our MathQA as well as the AQuA datasets.",
"The results are still significantly lower than human performance indicating that the dataset poses new challenges for future research.",
"Our dataset is available at: https: //math-qa.github.io/math-QA/ .",
"Answering math word problems poses unique challenges for logical reasoning over implicit or explicit quantities expressed in text.",
"Math word-problem solving requires extraction of salient information from natural language narratives.",
"Automatic solvers must transform the textual narratives into executable meaning representations, a process that requires both high precision and, in the case of story problems, significant world knowledge.",
"As shown by the geometry question in Figure 1, math word problems are generally narratives describing the progress of actions and relations over some entities and quantities.",
"The operation pro-An artist wishes to paint a circular region on a square poster that is 3.4 feet on a side.",
"gram underlying the problem in Figure 1 highlights the complexity of the problem-solving task.",
"Here, we need the ability to deduce implied constants (pi) and knowledge of domain-specific formulas (area of the square).",
"In this paper, we introduce a new operation-based representation language for solving math word problems.",
"We use this representation language to construct MathQA 1 , a new large-scale, diverse dataset of 37k English multiple-choice math word problems covering multiple math domain categories by modeling operation programs corresponding to word problems in the AQuA dataset (Ling et al., 2017).",
"We introduce a neural model for mapping problems to operation programs with domain categorization.",
"Most current datasets in this domain are small in scale (Kushman et al., 2014) or do not offer precise operational annotations over diverse problem types (Ling et al., 2017).",
"This is mainly due to the fact that annotating math word problems precisely across diverse problem categories is challenging even for humans, requiring background math knowledge for annotators.",
"Our representation language facilitates the annotation task for crowd-sourcing and increases the interpretability of the proposed model.",
"Our sequence-to-program model with categorization trained on our MathQA dataset outperforms previous state-of-the-art on the AQuA test set in spite of the smaller training size.",
"These results indicate the superiority of our representation language and the quality of the formal annotations in our dataset.",
"Our model achieves competitive results on MathQA, but is still lower than human performance indicating that the dataset poses new challenges for future research.",
"Our contributions are as follows: We introduce a large-scale dataset of math word problems that are densely annotated with operation programs We introduce a new representation language to model operation programs corresponding to each math problem that aim to improve both the performance and the interpretability of the learned models.",
"We introduce a neural architecture leveraging a sequence-to-program model with automatic problem categorization, achieving competitive results on our dataset as well as the AQuA dataset 2 Background and Related Work Large-Scale Datasets Several large-scale math word problem datasets have been released in recent years.",
"These include Dolphin18K (Huang et al., 2016), Math23K (Wang et al., 2017) and AQuA.",
"We choose the 2017 AQUA-RAT dataset to demonstrate use of our representation language on an existing large-scale math word problem solving dataset.",
"The AQuA provides over 100K GREand GMAT-level math word problems.",
"The problems are multiple choice and come from a wide range of domains.",
"The scale and diversity of this dataset makes it particularly suited for use in training deep-learning models to solve word problems.",
"However there is a significant amount of unwanted noise in the dataset, including problems with incorrect solutions, problems that are unsolvable without brute-force enumeration of solutions, and rationales that contain few or none of the steps required to solve the corresponding problem.",
"The motivation for our dataset comes from the fact we want to maintain the challenging nature of the problems included in the AQuA dataset, while removing noise that hinders the ability of neuralized models to learn the types of signal neccessary for problem-solving by logical reasoning.",
"Additional Datasets Several smaller datasets have been compiled in recent years.",
"Most of these works have focused on algebra word problems, including MaWPS (Koncel-Kedziorski et al., 2016), Alg514 (Kushman et al., 2014), and DRAW-1K (Upadhyay and Chang, 2017).",
"Many of these datasets have sought to align underlying equations or systems of equations with word problem text.",
"While recent works like (Liang et al., 2018; Locas-cio et al., 2016) have explored representing math word problems with logical formalisms and regular expressions, our work is the first to provide well-defined formalisms for representing intermediate problem-solving steps that are shown to be generalizable beyond algebra problems.",
"Solving with Handcrafted Features Due to sparsity of suitable data, early work on math word problem solving used pattern-matching to map word problems to mathematical expressions (Bo-brow, 1964; Charniak, 1968, 1969), as well as non-neural statistical modeling and semantic parsing approaches (Liguda and Pfeiffer, 2012).",
"Some effort has been made on parsing the problems to extract salient entities (Hosseini et al., 2017).",
"This approach views entities as containers, which can be composed into an equation tree representation.",
"The equation tree representation is changed over time by operations implied by the problem text.",
"Many early works focused on solving addition and subtraction problems (Briars and Larkin, 1984; Dellarosa, 1986; Bakman, 2007).",
"As word problems become more diverse and complex, we require models capable of solving simultaneous equation systems.",
"This has led to an increasing focus on finding semantic alignment of math word problems and mentions of numbers (Roy and Roth, 2018).",
"The main idea behind those work is to find all possible patterns of equations and rank them based on the problem.",
"Neural Word Problem Solvers Following the increasing availability of large-scale datasets like AQuA, several recent works have explored deep neural approaches to math word problem solving (Wang et al., 2017).",
"Our representation language is motivated by exploration of using intermediate formalisms in the training of deep neural problem-solving networks, as is done in the work of (Huang et al., 2018b) to solve problems with sequence to sequence models.",
"While this work focused on single-variable arithmetic problems, our work introduces a formal language of operations for covering more complex multivariate problems and systems of equations.",
"Interpretability of Solvers While the statistical models with handcrafted features introduced by prior work are arguably interpretable due to the relative sparsity of features as well as the clear alignments between inputs and outputs, new neuralized approaches present new challenges to model interpretability of math word problem solvers (Huang et al., 2018a).",
"While this area is relatively unexplored, a prior approach to increasing robustness and interpretability of math word problem-solving models looks at using an adversarial dataset to determine if models are learning logical reasoning or exploiting dataset biases through pattern-matching (Liang et al., 2018).",
"A math word problem consists of a narrative that grounds mathematical formalisms in real-world concepts.",
"Solving these problems is a challenge for both humans and automatic methods like neural network-based solvers, since it requires logical reasoning about implied actions and relations between entities.",
"For example, in Figure 2, operations like addition and division are not explicitly mentioned in the word problem text, but they are implied by the question.",
"As we examine the context of a math word problem, we have to select arguments for operations based on which values are unimportant for solving the problem and which are salient.",
"In Figure 2, the numeric value 100 appears in the context but does not appear in the underlying equation.",
"By selecting implied operations and arguments, we can generate a program of intermediate steps for solving a math word problem.",
"Each step inEquation If Lily's test scores are 85 , 89 , 80 and 95 out of 100 in 4 different subjects , what will be her average score?",
"volves a mathematical operation and its related arguments.",
"In Figure 2, there are three addition operations and one division.",
"As illustrated in the figure, operations can be dependant to the previous ones by the values they use as arguments.",
"Every math word problem can be solved by sequentially executing these programs of dependent operations and arguments.",
"We define formalisms for expressing these sequential operation programs with a domain-aware representation language.",
"An operation program in our representation language is a sequence with n operations.",
"The general form is shown below.",
"Each operation o i takes in a list of arguments a of length i : o 1 ( a 1 ) o 2 ( a 2 ) ...o n ( a n ) (1) Given this general definition, the problem in Figure 2 has the following representation 2 : add 1 (85 , 89) add 2 (174 , 80) add 3 (254 , 95) divide 4 (349 , 4) (2) Our representation language consists of 58 operations and is designed considering the following objectives.",
"Correctness Operation programs should result in the correct solution when all operations are executed.",
"Domain-awareness Operation problems should make use of both math knowledge and 2 Here the arguments 174 , 254 and 349 are the outputs of operations 1, 2 and 3 respectively.",
"domain knowledge associated with subfields like geometry and probability to determine which operations and arguments to use.",
"Human interpretability Each operation and argument used to obtain the correct solution should relate to part of the input word problem context or a previous step in the operation program.",
"Learning logical forms has led to success in other areas of semantic parsing (Cheng et al., 2017; Zelle and Mooney, 1996; Zettlemoyer and Collins, 2007, 2005) and is a natural representation for math word problem-solving steps.",
"By augmenting our dataset with these formalisms, we are able to cover most types of math word problems 3 .",
"In contrast to other representations like simultaneous equations, our formalisms ensure that every problem-solving step is aligned to a previous one.",
"There are three advantages to this approach.",
"First, we use this representation language to provide human annotators with clear steps for how a particular problem should be solved with math and domain knowledge.",
"Second, our formalisms provide neural models with a continuous path to execute operations for problems with systems of equations, instead of forcing models to align equations before problem solving.",
"This reduces the possibility of intermediate errors being propagated and leading to a incorrect solution.",
"Finally, by having neural models generate a solution path in our representation language before computing the final solution, we are able to reconstruct the logical hops inferred by the model output, increasing model interpretability.",
"Our dataset (called MathQA) consists of 37,200 math word problems, corresponding lists of multiple-choice options and aligned operation programs.",
"We use problems in the AQuA dataset and carefully annotate those problems with formal operation programs.",
"Math problems are first categorized into math domains using term frequencies (more details in Section 5.2).",
"These domains are used to prune the search space of possible operations to align with the word problem text.",
"Figure 3 shows 3 We omit high-order polynomials and problems where the solutions are entirely nonnumeric.",
"the category-based hierarchies for operation formalisms.",
"We use crowdsourcing to carefully align problems with operation programs (Section 4.1).",
"Table 1 shows overall statistics of the dataset.",
"4 4.1 Annotation using Crowd Workers Annotating GRE level math problems can be a challenging and time consuming task for humans.",
"We design a dynamic annotation platform to annotate math word problems with formal operation programs.",
"Our annotation platform has the following properties:",
"(a) it provides basic math knowledge to annotators,",
"(b) it is dynamic by iteratively calculating intermediate results after an operation submission, and",
"(c) it employs quality control strategies.",
"Dynamic Annotation Platform The annotators are provided with a problem description, a list of operations related to the problem category, and a list of valid arguments.",
"They iteratively select operations and arguments until the problem is solved.",
"Operation Selection The annotators are instructed to sequentially select an operation from the list of operations in the problem category.",
"Annotators are provided with math knowledge by hovering over every operation and getting the related hint that consists of arguments, formula and a short explanation of the operation.",
"4 We also experimented with an automatic dynamic programming approach to annotation that generates operation programs for problems using numbers in the AQuA rationales.",
"Due to the noise in the rationales, only 61% of those problems pass our human validation.",
"This is mainly due to the fact that the rationales are not complete programs and fail to explicitly describe all important numbers and operations required to solve the problem.",
"To maintain interpretability of operation paths, we did not include automatic annotations from our dataset and focus on operation programs derived by crowdsourcing.",
"to the annotators to choose from.",
"Valid arguments consist of numbers in the problem, constants in the problem category, and the previous calculations.",
"The annotators are restricted to select only from these valid arguments to prevent having noisy and dangling numbers.",
"After submission of an operation and the corresponding arguments, the result of the operation is automatically calculated and will be added as a new valid argument to the argument list.",
"Program Submission To prevent annotators from submitting arbitrary programs, we enforce restrictions to the final submission.",
"Our platform only accepts programs which include some numbers from the problem, and whose final calculation is very close to the correct numeric solution.",
"High Quality Crowd Workers We dynamically evaluate and employ high-quality annotators through a collection of quality-control questions.",
"We take advantage of the annotation platform in Figure Eight .",
"5 The annotators are randomly evaluated through a pre-defined set of test questions, and they have to maintain an accuracy threshold to be able to continue their annotations.",
"If an an-notator's accuracy drops below a threshold, their previous annotations are labeled as untrusted and will be added to the pool of annotations again.",
"Alignment Validation To further evaluate the quality of the annotated programs, we leverage a validation strategy to check whether the problems and annotated programs are aligned or not.",
"According to this strategy, at least 2 out of 3 valida-tors should rank the operation program as valid for it to be selected.",
"The validation accuracy is 94 .",
"64% across categories.",
"We develop encoder-decoder neural models to map word problems to a set of feasible operation programs.",
"We match the result of the executed operation program against the list of multiple-choice options given for a particular problem.",
"The matching solution is the final model output.",
"We frame the problem of aligning an operation program with a math word problem as a neural machine translation (NMT) task, where the word problem x and gold operation program y form a parallel text pair.",
"The vocabulary of y includes all possible operations and arguments in our representation language.",
"For our initial sequence-to-program model, we follow the attention-based NMT paradigm of (Bahdanau et al., 2015; Cho et al., 2014).",
"We encode the source word problem text x = ( x 1 , x 2 , ..., x M ) using a bidirectional RNN encoder enc .",
"The decoder dec predicts a distribution over the vocabulary and input tokens to generate each operation or argument in the target operation program.",
"For our sequence-to-program model vocabulary, we use informed generation, in which the program tokens are generated separately from the vocabulary of operations O or arguments A .",
"The encoded text is represented by a sequence of d -dimensional hidden states h enc = ( h enc 1 , h enc 2 , .., h encM ) , where M is the length of the input text.",
"A context vector a i is computed by taking the weighted sum of the attention model weights t,i for each timestep t (1 , 2 , ..., T ) and each encoder hidden state h enci : a i = (cid:80) Mi =1 t,i h enci .",
"We compute the d -dimensional decoder hidden Multiply 50 <eos> ...",
"This prediction is conditioned on the previous tokens ( y 1 , ..., y i 1 ) and the input x to decode an entire operation program y = ( y 1 , y 2 , ..., y N ) of length N : P ( y | x ) = N (cid:89) i =1 P ( y i | y <i , x ) (4) P ( y i | y <i , x ) = g ( f ( h deci , y i , a i )) (5) Here f is a 1-layer feed-forward neural network and g is the softmax function.",
"During training time, we minimize the negative log-likelihood (NLL) using the following objective: L ( enc , dec ) = logP ( y | x ; enc , dec ) (6) At test time, we only observe the input text when predicting operation programs: y = argmax y P ( y | x ) (7) 5.2 Categorized Sequence-to-Program Model We extend our base sequence-to-program model to integrate knowledge of math word problem domain categories.",
"We modify the RNN decoder layers that compute the decoder hidden state to be category-aware.",
"Here, the category label c is deterministically computed by the category extractor (explained below).",
"It functions as a hard decision switch that determines which set of parameters to use for the hidden state computation: h deci = LST M c ( h deci 1 , y i 1 , a i ) (8) The updated objective function from equation (7) is shown below: L ( enc , decc ) = logP ( y | x ; enc , decc ) (9) The full model architecture is shown in Figure 4.",
"Domain-Specific Category Extraction We first construct a lexicon of n-grams relating to a specific domain.",
"The lexicon is a list consisting of domain-specific categories and associated n-grams.",
"For each domain category c in the lexicon, we select associated n-grams n c that occur frequently in word problems belonging to domain category c , but rarely appear in other domain categories.",
"We compute n-gram frequency f pc as the number of n-grams associated with a category c appearing in the text of a word problem p .",
"We obtain a list of potential categories for p by choosing all categories for which f pc > 0 , and then assign a category label to p based on which category has the highest n-gram frequency.",
"sequentially along with its predicted set of arguments to obtain a possible solution.",
"For each word problem p and options o , we generate a beam of the top n decoded operation programs.",
"We execute each decoded program g to find the solution from the list of options o of the problem.",
"We first choose options that are within a threshold of the executed value of g .",
"We select g as the predicted solution by checking the number of selected options and the minimum distance between the executed value of g and a possible option for p .",
"For the problems in AQuA that do not belong in any category of MathQA, we randomly choose an option.",
"Our dataset consists of 37 k problems which are randomly split in (80 / 12 / 8)% training/dev/test problems.",
"Our dataset significantly enhances the AQuA dataset by fully annotating a portion of solvable problems in the AQuA dataset into formal operation programs.",
"We carefully study the AQuA dataset.",
"Many of the problems are near-duplicates with slight changes to the math word problem stories or numerical values since they are expanded from a set of 30,000 seed problems through crowdsourcing (Ling et al., 2017).",
"These changes are not always reflected in the rationales, leading to incorrect solutions.",
"There are also some problems that are not solvable given current math word problem solving frameworks because they require a level of reasoning not yet modeled by neural networks.",
"Sequence problems, for example, require understanding of patterns that are difficult to intuit without domain knowledge like sequence formulas, and can only be solved automatically through brute-force or guessing.",
"Table 2 shows a full breakdown of the AQuA dataset by solvability.",
"6 6.2 Annotation Details We follow the annotation strategy described in Section 4 to formally annotate problems with operation programs.",
"7 6 There is overlap between unsolvable subsets.",
"For example, a sequence problem may also be a duplicate of another problem in the AQuA dataset.",
"7 We tried two other strategies of showing extra information (rationales or end solutions) to annotators to facilitate solving problems.",
"However, our manual validation showed Subset Train Valid Unsolvable No Words 37 0 Unsolvable Sequence 1,991 4 Unsolvable Requires Options 6,643 8 Unsolvable Non-numeric 10,227 14 Duplicates 17,294 0 Solvable 65,991 229 Total 97,467 254 Table 2: Full original AQuA solvability statistics.",
"Annotator Agreements and Evaluations Our expert evaluation of the annotation procedure for a collection of 500 problems shows that 92% of the annotations are valid.",
"Additionally, it has 87% agreement between the expert validation and the crowd sourcing validation task.",
"Annotation Expansion The AQuA dataset consists of a group of problems which share similar characteristics.",
"These problems can be solved with similar operation programs.",
"We find closely similar problems, replace numeric values with generic numbers, and expand annotations to cover more problems from the AQuA dataset.",
"For similarity, we use Levenshtein distance with a threshold of 4 words in edit distance.",
"We use the official python implementation of OpenNMT (Klein et al.).",
"We choose a LSTM-based encoder-decoder architecture.",
"We use Adam optimizer (Kingma and Ba, 2015), and the learning rate for training is 0 .",
"001 .",
"The hidden size for the encoder and decoder is set to d = 100 .",
"Both the encoder and decoder have 2 layers.",
"The word embedding vectors are randomly initialized.",
"At inference time, we implemented a beam search with beam size of 200 for AQuA and 100 for MathQA.",
"The program vocabulary consists of the operations O in our representation language and valid arguments A .",
"For valid arguments, we do not use their actual values since the space is very large.",
"Instead, we keep a list of numbers according to their source.",
"Constants are predefined numbers that are available to all problems.",
"Problem numbers are added to the list according to their order in the problem text.",
"Calculated numbers in the interme-that annotators mostly used those extra information to artificially build an operation program without reading the problem.",
"diate steps are added to the list according to the operation order.",
"Table 3 compares the performance of our sequence-to-program models trained on MathQA with baselines on MathQA and AQuA test sets.",
"The base model is referred to as Seq2prog, while our model with categorization is Seq2prog + cat.",
"For accuracy, the performance was measured in terms of how well the model would perform on an actual math test.",
"We observe improvement for our Seq2prog + cat model despite the fact that our training data is proportionally smaller than the AQuA dataset, and our model is much simpler than the state-of-the-art model on this dataset.",
"This indicates the effectiveness of our formal representation language to incorporate domain knowledge as well as the quality of the annotations in our dataset.",
"Qualitative Analysis Table 5 and Figure 5 show some examples of problems solved by our method.",
"We analyzed 50 problems that are solved wrongly by our system on the MathQA dataset.",
"Table 4 summarizes four major categories of errors.",
"The most common type of errors are problems that need complicated or long chain of mathematical reasoning.",
"For example, the first problem in Table 4 requires reasoning that goes beyond one sentence.",
"Other errors are due to limitations in our representation language.",
"For example, the second problem in Table 4 requires the factorization operation which is not defined in our representation language.",
"Future work can investigate more domains of mathematics such as logic, number factors, etc.",
"Some errors are due to the slightly noisy nature of our categorization strategy.",
"For example, the third problem in Table 4 is mistakenly categorized as belonging to physics domain due to the presence of words m, cm, liter in the problem text, while the correct category for the problem is geometry .",
"The final category of errors are due to problems that do not have enough textual context or erroneous problems (e.g., fourth problem in Table 4).",
"Impact of Categorization Table 3 indicates that our category-aware model outperforms the base model on both AQuA and MathQA datasets.",
"The gain is relatively small because the current model only uses categorization decisions as hard constraints at decoding time.",
"Moreover, the problem categorization might be noisy due to our use of only one mathematical interpretation for each domain-specific n-gram.",
"For example, the presence of the words square or cube in the text of a math word problem indicate that the word problem is related to the geometry domain, but these unigrams can also refer to an exponential operation ( n 2 or n 3 ).",
"To measure the effectiveness of our categorization strategy, we used human annotation over 100 problems.",
"The agreement between human annotators is 84% and their agreement with our model is 74 .",
"5% .",
"As a future extension of this work, we would like to also consider the context in which domain-specific n-grams appear.",
"Discussions As we mentioned in section 3, the continuous nature of our formalism allows us to solve problems requiring systems of equations.",
"However, there are other types of word prob-Error type Problem Hard problems ( 45% ) Jane and Ashley take 8 days and 40 days respectively to complete a project when they work on it alone.",
"Problem : A rectangular field is to be fenced on three sides leaving a side of 20 feet uncovered.",
"if the area of the field is 10 sq.",
"feet, how many feet of fencing will be required?",
"Operations : divide(10,20), multiply( #0 , const_2), add(20, 1) Problem : How long does a train 110m long running at the speed of 72 km/hr takes to cross a bridge 132m length?",
"Operations : add(110, 132), multiply(72, const_0.2778), divide( #0 , #1 ), floor( #2 ) Table 5: Problems solved correctly by Seq2prog+cat model.",
"lems that are currently unsolvable or have multiple interpretations leading to multiple correct solutions.",
"While problems that can only be solved by brute-force instead of logical reasoning and nonnarrative problems that do not fit the definition of a math word problem (in Table 2 these appear as no word) are removed from consideration, there are other problems that are beyond the scope of current models but could pose an interesting challenge for future work.",
"One example is the domain of sequence problems.",
"Unlike past word problem-solving models, our models incorporate domain-specific math knowledge, which is potentially extensible to common sequence and series formulas.",
"In this work, we introduced a representation language and annotation system for large-scale math word problem-solving datasets that addresses unwanted noise in these datasets and lack of formal operation-based representations.",
"We demonstrated the effectiveness of our representation language by transforming solvable AQuA word problems into operation formalisms.",
"Experimental results show that both our base and category-aware sequence-to-program models outperform baselines and previous results on the AQuA dataset when trained on data aligned with our representation language.",
"Our representation language provides an extra layer of supervision that can be used to reduce the influence of statistical bias in datasets like AQuA.",
"Additionally, generated operation programs like the examples in figure 5 demonstrate the effectiveness of these operation formalisms for representing math word problems in a human interpretable form.",
"The gap between the performance of our models and human performance indicates that our MathQA still maintains the challenging nature of AQuA problems.",
"In future work, we plan to extend our representation language and models to cover currently unsolvable problems, including sequence and high-order polynomial problems.",
"Acknowledgements This research was supported by ONR (N00014-18-1-2826), NSF (IIS 1616112), Allen Distinguished Investigator Award, and gifts from Google, Allen Institute for AI, Amazon, and Bloomberg.",
"We thank Marti A. Hearst, Katie Stasaski, and the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.",
"The one-shot setting is inadequate, however, when the constraints the user wishes to impose on the generated text are dynamic, especially when authoring longer documents.",
"We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.",
"To this end, we propose a novel text editing task, and introduce WikiDocEdits, a dataset of single-sentence edits extracted from Wikipedia revision histories.",
"We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations.",
"We present empirical and qualitative analyses of this model's performance.",
"1 1 Introduction A long-standing goal of natural language processing research has been to generate long-form text (Lebowitz, 1985; Fan et al., 2018; Rashkin et al., 2020).",
"Recent large generative language models such as GPT-2 (Radford et al., 2019), and GPT-3 (Brown et al., 2020), demonstrate an impressive ability to generate fluent text, but their outputs are difficult to control beyond a prompt, and they manifest a tendency to hallucinate facts (Wiseman et al., 2017).",
"Much recent work has thus focused on making such models more controllable (Keskar et al., 2019; Hu et al., 2017; Zhang et al., 2020; Dathathri et al., 2019), and factually grounded (Guu et al., 2020; Liu et al., 2018b).",
"* Work done at Microsoft Research.",
"1 All our code (including code to recreate our data) and pre-trained models will be made available at: http://microsoft.com/research/project/interactive-document-generation Barack Obama was the 44 th President of the United States.",
"Most such work only considers a one-shot generation setting.",
"Given a set of inputs, which may be a prompt, a control code (Keskar et al., 2019), or a table of data (Liu et al., 2018b) for example, the system generates text in a single step.",
"Humans, though, often produce text through an evolutionary process involving multiple draft-edit cycles.",
"This is not simply because they make mistakes when writing, but because they may require multiple iterations to help them shape and even make sense of what they want to express (Pirolli and Card, 2005).",
"For example, consider a user writing an article about Barack Obama.",
"They might start with a simple sentence such as Barack Obama was the 44th President of the United States.",
"Next, they may wish to expand on that sentence, adding information, or rephrasing it to integrate it better with the text.",
"Replicating this process in software will mean allowing users to adjust their requirements in response to model outputs.",
"Even an error-free system that meets all of a user's initial requirements does not obviate the need for iteration, since those constraints are themselves dynamic.",
"While this work focuses on text, we also note that these arguments extend to other settings where a system must generate a complex, structured object for a user, such as image or code generation.",
"The purpose of this paper is to bring into view the task of controllable text editing, as a step beyond one-shot generation towards interactive document generation.",
"A full interactive document generation system will likely comprise multiple components, possibly including one-shot generation to create a first draft.",
"Editing is crucial to interactivity because it allows users to change previously generated text to fit their dynamic constraints.",
"This is a stateful operation, where the state is the current version of the document, as opposed to stateless recasting of text from scratch using a one-shot model.",
"While services like Gram-marly or MS Word already offer rewriting suggestions, they mainly focus on syntactic or stylistic edits such as paraphrases (Gupta et al., 2018).",
"In this work, we are interested in a broader range of edits, particularly those that add or remove content, or change the meaning of text.",
"Figure 1 illustrates this editing setting with an example from our trained model, where a user produces a sentence about Barack Obama over multiple edits.",
"In sum, we make the following contributions: We introduce a challenging new text editing task, wherein a model must learn to edit text in response to a user command, while drawing on grounding to avoid problems of hallucination (Wise-man et al., 2017).",
"To accompany this task, we release an open-source dataset of sentence-level edits extracted from Wikipedia, including editor comments, which we leverage as natural language commands, together with pre-retrieved grounding documents.",
"We show that a transformer-based editing model trained on our data outperforms parrot and GPT-2 baselines, and obtains competitive results compared to gold-standard edits in human evaluations.",
"We then perform an empirical analysis of our model's performance, showing the importance of the command and grounding, and the varying difficulty of edits in our dataset.",
"We now formalize our text editing task.",
"Let D be a document, q a user command 2 , and G some appropriate form of grounding.",
"Moreover, let D (cid:48) be an edited version of D .",
"Then our task is, given a dataset of edits D = { ( D 0 , q 0 , G 0 , D (cid:48) 0 ) , ..., ( DN , q N , GN , D (cid:48) N ) } , learn to produce document D (cid:48) , given D , q , and G .",
"Note that while previous work on text editing usually only considers D as input, we include both a form of control q and grounding G .",
"The command is needed because otherwise the type of edit to be made is undefined, while the grounding provides external knowledge needed to make an edit.",
"In our specific instance of this task, we will only consider sentence-level edits.",
"More formally, we consider edits D D (cid:48) , where D and D (cid:48) differ only on a single sentence s D , respectively s (cid:48) D (cid:48) .",
"While, in general, edits can vary in complexity from document-level to character-level changes, sentences are a natural way to break down text into relatively independent units of meaning, so it makes sense to edit text one sentence at a time.",
"More complex, document-level edits can be seen as a composition of multiple sentence-level edits.",
"Additionally, we will consider user commands q written in natural language, e.g., add years in of-fice.",
"The command could also take other forms, such as a categorical variable, but natural language allows for the greatest flexibility in specifying what the edit should accomplish.",
"Moreover, natural language commands are a good fit for our model, which we will initialize with pretrained language model weights.",
"For similar reasons, we will also consider corpora of text snippets as our grounding G .",
"Alternatively, the grounding could also consist of structured data such as tables or graphs.",
"In a real user scenario, this grounding might be supplied by the user, or retrieved on the fly.",
"For our dataset, we pre-retrieve groundings by querying a commercial search engine.",
"To accompany our text editing task we present a novel dataset of nearly 12 million sentence-level edits, WikiDocEdits.",
"These edits were extracted from the revision histories in the February 1, 2020 2 This notation reflects that the edit command is analogous to a query in a retrieval or QA setting in that it expresses a form of user intent.",
"dump of English Wikipedia.",
"3 For a given Wikipedia page, a revision consists of a source and target text, corresponding to the old and new versions of the page.",
"Each revision is also accompanied by an editor comment, which we will use as a proxy for the user command.",
"For a given revision, we split the source and target texts into sentences and then attempt to match the sentences between source and target.",
"For efficiency, we only look at a k -sentence neighborhood.",
"Unmatched sentences are candidates for edits.",
"A source sentence s and target sentence t form an edit pair s t if f ( s, t ) > (cid:15) , where f is sentence-level BLEU 4 without smoothing and (cid:15) = 0 .",
"1 in our case.",
"If an unmatched source sentence does not form an edit pair with any target sentence, we consider it to be a sentence deletion.",
"This can also be thought of as matching to an empty sentence.",
"We identify sentence insertions in an analogous manner.",
"Importantly, we only consider revisions that contain a single sentence-level edit.",
"Otherwise, the editor comment that accompanies each revision may only describe one of the possibly many sentence-level edits.",
"See appendix A for a detailed description of our processing pipeline.",
"We retrieve grounding snippets for the edits in our dataset by querying a commercial search engine.",
"In order to formulate a query for a given edit, we combine the relevant page and section titles with keywords 5 from the target sentence.",
"While the target sentence is not available at test time, we make the assumption that in a real user scenario the relevant grounding would be provided by the user.",
"We retrieve the top 200 returned web page results and only keep the preview snippets returned by the search engine as the grounding corpus.",
"6 Because Wikipedia, as well as several clones, often appear in search engine results, we check for 4 -gram overlap between the target sentence and each grounding snippet, removing any snippet with more than 50% overlap.",
"Finally, we rerank 7 the retrieved snippets using an information extraction score, and merge the ranked snippets to take the first N = 512 tokens.",
"3 Downloadable from https://dumps.wikimedia.org/.",
"4 We use BLEU-4 in all experiments of this paper.",
"5 See appendix B for how we identify keywords.",
"6 We also experimented with retrieving and parsing the HTML pages from the search but this did not lead to better end-to-end performance than just using the snippets.",
"7 See appendix C for details on reranking.",
"We now provide an overview of our dataset.",
"From 667 dump files in the February 1 st 2020 dump of Wikipedia, we extract 11,850,786 edits, and take a 1% sample of 118,818 edits to run our analyses.",
"Table 1 presents summary statistics for our data, and in the following, we break down the edits by edit type, and present some examples.",
"See also appendix D for an analysis of the quality of the retrieved grounding.",
"Fluency and Content Edits We are interested in the distribution of different edit types within our dataset.",
"In particular, we want to distinguish between fluency edits, which only affect the grammar or structure of a sentence, and content edits, which change the meaning of a sentence.",
"We can lean on previous work to categorize edits on Wikipedia.",
"Yang et al. (2017) create 13 edit intention categories, and train a classifier to label revisions according to the categories.",
"We apply their classifier to our data, and group their 13 categories into fluency, content, or other edits, as reported in table 2.",
"With the caveat that the edits were labelled automatically using a trained classifier, we see that, while fluency edits make up the majority of the edits in our data, a large proportion are content edits.",
"Examples Table 3 presents some examples from our data.",
"These were chosen to illustrate a variety of edits.",
"The first example shows an elaboration edit, appending new information to the end of a sentence.",
"The second example is a simple typo fix, while the third is changing a fact.",
"Finally, the last example is a more complex edit to reword a sentence.",
"We can see that there is a large variety of edits in our dataset.",
"See table 11 in the appendix for more examples.",
"We formalize our model, which we refer to as Interactive Editor, as a standard auto-regressive sequence to sequence model.",
"Because our data only contains single-sentence edits, we assume that the sentence to be edited in the source document is given as an input to the model.",
"Given a source sentence s D , the context around s , which we will refer to as D by abuse of notation, a user command q , a grounding corpus G , and a candidate target sentence s (cid:48) , the model, f , computes f ( s, s (cid:48) , D, q, G ) = P ( s (cid:48) | s, D, q, G ) = (cid:89) i P ( s (cid:48) i | s (cid:48) <i , s, D, q, G ) , where s (cid:48) <i = { s (cid:48) 0 , ..., s (cid:48) i 1 } are the tokens preceding s (cid:48) i in s (cid:48) .",
"We use the same encoder-decoder architecture as T5 (Raffel et al., 2020) and initialize our model with pretrained language model weights.",
"The encoder-decoder architecture allows us to perform full attention over the inputs s, D, q , and G , while the decoder allows us to auto-regressively generate s (cid:48) .",
"Meanwhile, initializing with pretrained weights has been shown to achieve state-of-the-art results on many NLP tasks (Raffel et al., 2020).",
"In order to adapt T5 for our task, we represent all our inputs as sequences of tokens.",
"We then concatenate these sequences together using separator tokens, truncating and padding them to fixed lengths.",
"This is straightforward since all our inputs are text.",
"See fig.",
"2 for reference.",
"We also use the standard cross-entropy loss to train.",
"We train our model on a subset of 1,020K edits from WikiDocEdits.",
"We use a train-ing/validation/test split of 1,000K/10K/10K edits, and train for 3 epochs with a fixed learning rate of 0.0001, and a batch size of 128.",
"We use the T5-base implementation from Huggingface (Wolf et al., 2020), and finetune all weights in the model.",
"We validate every 200 steps and select the model with the lowest validation loss.",
"For inference we use beam search with a beam width of 5, and keep the 5 highest ranked candidates, excluding any generation that parrots the source as this corresponds to making no edits.",
"Metrics We consider several metrics to evaluate our model.",
"One natural metric to consider is BLEU ((Papineni et al., 2002)).",
"BLEU shows high correlation with human judgement on machine translation (Papineni et al., 2002; Dodding-ton, 2002).",
"While this should not a priori transfer to evaluating different tasks, our task in fact bears a high similarity to machine translation because of how the output is constrained by the inputs.",
"If, for example, the source sentence in an English to German translation task is Sally met Lucy, the German translation must in some way mention Sally and Lucy.",
"Similarly, in our task, if the source sentence is Barack Obama was the 44th President of the United States, and the command is add birth date, the edit must somehow mention a birth date somewhere.",
"Thus, in our setting, BLEU makes sense as a metric since in prin-ciple a good model output should not deviate too far from the reference.",
"We use macro-averaged Comment added class of '13 Source Krishna attended Dartmouth College where she was a double major in government and French.",
"sentence-level BLEU with epsilon smoothing and equally weighted n -grams, with n up to 4 .",
"One issue with BLEU is that the source and target sentences in our task are already very similar, so a model that simply parrots back the source sentence could achieve an unduly high score.",
"Therefore, we also evaluate model outputs by comparing the word-level edits made by the model against the reference, where a word-level edit is a tuple of an operation, either insertion or deletion, a position, and a word.",
"For example, in the edit Barack Obama was the 44 th President of the United States Barack Obama, born August 4 th 1961, was the 44 th President of the United States, the set of word edits would look like { ( insert , 2 , , ) , ( insert , 3 , born ) , ... } .",
"Now, denote the set of word edits between two sentences a and b as WE ( a, b ) .",
"Then, with s the source sentence, s (cid:48) the reference target sentence and h the target sentence generated by the model, we compute the precision PWE ( s (cid:48) , h, s ) = | WE ( s (cid:48) , s ) WE ( h, s ) | | WE ( h, s ) | , recall, RWE ( s (cid:48) , h, s ) = | WE ( s (cid:48) , s ) WE ( h, s ) | | WE ( s (cid:48) , s ) | , and F1 score, F 1 , WE ( s (cid:48) , h, s ) = 2 PWE RWEPWE + RWE .",
"Finally, we compute sentence-level accuracy, which reports the proportion of edits for which the model output exactly matched the reference.",
"Baselines We use two baselines to compare our model to.",
"First, we consider the parrot baseline that simply outputs the source sentence as is.",
"The second baseline attempts to delete the source sentence and replace it with a new sentence.",
"We use a pretrained GPT-2 model (Radford et al., 2019) that generates a sentence given the left context.",
"Table 5 presents our main results.",
"Notice that the parrot baseline is able to achieve a considerably high BLEU score, as expected, while the GPT-2 baseline surprisingly achieves a high word edit recall score.",
"Our interactive neural editor model is able to beat both baselines across all metrics, as would be expected.",
"Even on a harsh metric like accuracy our model achieves a nontrivial score, although we suspect most of the edits that the model gets exactly right are fluency edits.",
"See table 6 for Comment Added more marriage info.",
"Ablations The middle rows of Table 5 show the results for three ablations of our model.",
"The first ablation removes everything but the source sentence s .",
"This is similar to the paraphrase setting (Gupta et al., 2018), and the editing setting in Faruqui et al. (2018) and Yin et al. (2018).",
"We can see that including the context, grounding, and command as additional inputs yields significant improvements over only using the source sentence.",
"We can also see from the second ablation that the commands are a crucial element in the model's performance.",
"This is not surprising since without a command the model must guess what type of edit to make.",
"Similarly, the model without grounding performs considerably worse than the full model, showing that the grounding is equally important as the command.",
"Surprisingly, the last two ablations perform only marginally better than the first, meaning that removing the grounding in addition to the commands, or vice-versa, does not lead to a large drop in performance.",
"This seems to suggest a synergistic effect between the command and the grounding, which makes sense since the model would not know what to do with the grounding without a command, and likewise, the model would not have access to the right information without the grounding, even if it knew what to edit from the command.",
"Breakdown by edit type The results of our full model are broken down by edit intention labels in Table 6.",
"The columns report the same metrics as in our main table of results, with the exception of S-BLEU, which reports the BLEU score between the source sentence and target, and the last column, which reports the number of test edits that were classified into each category.",
"With the caveat that intention labels come from an automatic classifier and not human annotation, we can observe that our model has varying performance across different types of edits.",
"The model performs very well on fluency edits, but worse on content edits.",
"This comes at no surprise given that fluency ed-Intention Category Acc.",
"its should be easier as they usually correct minor mistakes, which a language model should be able to detect from pretraining.",
"Content edits, on the other hand, require pulling the correct information from the grounding and incorporating it in the correct manner into the sentence.",
"The S-BLEU scores confirm this since the source sentences in the fluency examples are much more similar to the target sentences than for the content edits.",
"In fact, when looking at the absolute improvement of the BLEU over the S-BLEU scores, the model performs equally well on both types of edits.",
"We conducted two rounds of human evaluations, each time across 200 examples from our test set.",
"Annotators were crowd sourced, and each example was rated by seven judges for a total of 1400 judgements.",
"8 Command and Grounding In our first round of human evaluations we compared our model's top output from beam search to the reference edit.",
"There were two tasks.",
"In the first task, we asked judges to choose which system better accomplished the command q .",
"In the second, we asked which system was more faithful to the grounding G .",
"Table 7 presents the results.",
"Although there is a clear preference for the Reference edits in the command-related task, 59% of judgments suggest that Interactive Editor may be equal to or better 8 The annotators were remunerated at a rate above the prevailing Seattle minimum wage at the time.",
"than the reference.",
"9 In the grounding task, Interactive Editor demonstrates good correspondence with the background material.",
"10 Judges were further asked whether the retrieved grounding was relevant to the context D : 92.86% of judgments recorded the grounding as either Somewhat rele-vant or Very relevant.",
"Absolute Scoring We also evaluated the overall quality of model outputs.",
"We considered our full model, and our ablated model that only takes the source sentence as input.",
"We also considered showing and hiding the edit commands, for a total of 4 settings.",
"For a given setting, raters were asked whether they found each of the top 3 model outputs satisfactory.",
"Table 8 presents the results for the top model outputs, with bootstrapped p-values for pairwise comparisons.",
"We use a Bon-ferroni corrected = 0 .",
"0125 to determine significance.",
"Note that our full model outperforms our ablated model in the first two comparisons.",
"Inter-9 The high percentage of Neutral judgments here may be partially attributable to other factors.",
"Majority Neutral judgments are observed for approximately 65% of those examples that received at least one Neutral judgment.",
"This suggests many commands may not be readily interpretable to judges.",
"10 Appendix E presents some additional automatic metrics to measure the faithfulness of the model to the grounding.",
"estingly, the difference is smaller when the raters are not shown the commands.",
"Additionally, only the ablated model is rated differently depending on whether the commands are shown.",
"This is to be expected since the ablated model is not likely to be faithful to the commands.",
"In addition to reporting the mean scores from the raters, we can also look at the number of examples where at least one of the top model outputs was found satisfactory by human judges (i.e. scored higher than 3).",
"We find that, when showing the edit commands, at least one of the outputs from our full model was satisfactory in 85 .",
"83 % of cases versus 60 .",
"17 % for the ablated model.",
"This paper focuses on the task of editing individual sentences, which we believe to be a challenging task for NLP, as it involves making nuanced changes to text according to natural language commands.",
"We also believe this task has useful applications, particularly in speech-to-text scenarios, where it may be more convenient to speak out a command rather than edit the text directly.",
"However, we also wish to emphasize that this task is a step towards a larger goal of interactive document generation, and that there are many interesting future directions to explore in this space.",
"While this paper has focused on single interactions (i.e. making isolated edits to text), it would be worth modeling multiple interactions between the user and model.",
"One can imagine that there may be a natural order in which to make edits, such as adding information at the start, and fine-tuning the language at the end.",
"It is an open question whether or not a model could learn this.",
"For illustration, table 9 gives an example of using our model to make several edits in order to create a sentence.",
"Ultimately, this may look more like a dialogue than a sequence of commands coming from the user.",
"Additionally, it would also be interesting to look at other settings where a model must generate a complex, structured object for a user, such as code, or images.",
"We hope that our text editing task, as a first step, can demonstrate the potential for interactive generation systems, and that it will encourage the community to pursue more ideas in this space.",
"Grounded Generation Large language models can generate fluent text (Radford et al., 2019; Brown et al., 2020; Raffel et al., 2020), but they have a tendency to hallucinate facts (Wiseman et al., 2017).",
"Thus, several works have explored using various forms of grounding to enable models to generate factually consistent texts (Koncel-Kedziorski et al., 2019; Liu et al., 2018b; Prab-humoye et al., 2019; Liu et al., 2018a; Guu et al., 2020).",
"Our work uses grounding to ensure that edits are factually correct, although our task differs from previous work because of the user command, which requires specific information to be retrieved from the grounding during generation.",
"Controllable Generation While grounding can be seen as a way to implicitly control the contents of generated text, other works have explored more explicit forms of control.",
"Hokamp and Liu (2017) and Zhang et al. (2020) use lexical constraints, while Keskar et al. (2019) and Dathathri et al. (2019) control higher level attributes of text, such as style, tone, or topic.",
"Our task instead uses natural language commands, which can flex-ibly express different types of constraints, ranging from low-level lexical ones, to high-level topical ones.",
"In this sense, we can also draw the parallel to dialog response generation (Ghazvininejad et al., 2018; Dinan et al., 2018), task-oriented dialog (Gao et al., 2018), or open domain question answering (Min et al., 2019; Chen et al., 2017), that also involve user responses or queries, although these tasks are not concerned with text generation in the context of document creation.",
"Story Generation The task of Document Generation considered in our work bears similarity with work on generating long-form narratives (Jain et al., 2017).",
"While earlier work in Story Generation focused more on plan-based architectures (Lebowitz, 1985), more recent work moved towards end-to-end approaches (Fan et al., 2018) allowing generation to be unconstrained and creative.",
"As narratives are often aimed at particular goals expressed in terms of outlines and plans, much of the literature in Story Generation is framed as a form of controllable generation, using storylines (Peng et al., 2018), events (Martin et al., 2017; Harrison et al., 2017), plot words or word skeletons (Xu et al., 2018; Ippolito et al., 2019), plans (Yao et al., 2019), story ending (Tambwekar et al., 2019), and outlines (Rashkin et al., 2020) as various forms of constraints.",
"Our work takes a significantly different approach, as we treat document or story generation as an iterative process that allows a human to generate a full document from scratch, but also allows constraints to be more dynamic (e.g., add nationality in Table 9 only if the system missed that the first time).",
"Text Editing Several previous works have focused on text editing.",
"Guu et al. (2018) generate sentences by editing prototypes taken from their training corpus, although they use editing only as a means for language modeling.",
"Wu et al. (2019) expand upon Guu et al. (2018)'s setting, but for dialog.",
"More related to our own setting, Faruqui et al. (2018) propose WikiAtomicEdits, a dataset of edits crawled from Wikipedia.",
"However, they consider a much narrower definition of edits than our data does.",
"Yin et al. (2018) use WikiAtomicEdits and propose the task of learning to represent edits, which Marrese-Taylor et al. (2020) expand using a variational approach.",
"In contrast, we are more interested in generating edits rather than representing them.",
"Related to Wikipedia data, Pryzant et al. (2020) also used Wikipedia revision histories to learn to debias text, whereas we considered general edits.",
"Iso et al. (2020) propose a fact-based text editing task, but they do not consider control or other types of edits.",
"Another related task to text editing is text paraphrasing (Gupta et al., 2018), however paraphrasing usually conserves the meaning of a sentence.",
"While the edits we consider include meaning-preserving edits, we are mostly interested in edits that affect meaning.",
"In this work we argued that text generation should be interactive, and, as a means towards that end, we proposed a general text editing task, where a system must edit a document in response to a user command.",
"In our specific instance of the task we considered single-sentence edits, and we crawled a dataset of several million edits from Wikipedia that included commands, in the form of editor comments, as well as grounding documents.",
"We then showed that training a transformer-based model on our data, while initializing with pretrained language model weights, yields encouraging results on both automatic and human evaluations.",
"Additionally, our ablation studies showed the crucial role played by the user command and grounding.",
"Breaking down our results by types of edits, we saw that our model not only performs well on easier fluency edits, but also on much harder content edits.",
"Finally, we discussed future research directions for interactive document generation, as well as possible extensions to other domains such as images or code.",
"The authors would like to thank Thomas Hofmann, as well as Sudha Rao, Matt Richardson, Zhang Li, Kosh Narayanan, and Chandra Chikkareddy for their helpful suggestions."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"other",
"objective",
"method",
"other",
"other",
"method",
"objective",
"method",
"result",
"result",
"result",
"abstain",
"other"
] |
[
"Interpretability is an important aspect of the trustworthiness of a model's predictions.",
"Transformer's predictions are widely explained by the attention weights, i.e., a probability distribution generated at its self-attention unit (head).",
"Current empirical studies provide shreds of evidence that attention weights are not explanations by proving that they are not unique.",
"A recent study showed theoretical justifications to this observation by proving the non-identifiability of attention weights.",
"For a given input to a head and its output, if the attention weights generated in it are unique, we call the weights identifiable.",
"In this work, we provide deeper theoretical analysis and empirical observations on the identifiability of attention weights.",
"Ignored in the previous works, we find the attention weights are more identifiable than we currently perceive by uncovering the hidden role of the key vector.",
"However, the weights are still prone to be non-unique attentions that make them unfit for interpretation.",
"To tackle this issue, we provide a variant of the encoder layer that decouples the relationship between key and value vector and provides identifiable weights up to the desired length of the input.",
"We prove the applicability of such variations by providing empirical justifications on varied text classification tasks.",
"The implementations are available at https://github.com/declare-lab/ identifiable-transformers .",
"Widely adopted Transformer architecture (Vaswani et al., 2017) has obviated the need for sequential processing of the input that is enforced in traditional Recurrent Neural Networks (RNN).",
"As a result, compared to a single-layered LSTM or RNN model, a single-layered Transformer model is computationally more efficient, reflecting in a relatively shorter training time (Vaswani et al., 2017).",
"This advantage encourages the training of deep Transformer-based language models on large-scale datasets.",
"Their learning on large corpora has already attained state-of-the-art (SOTA) performances in many downstream Natural Language Processing (NLP) tasks.",
"A large number of SOTA machine learning systems even beyond NLP (Lu et al., 2019) are inspired by the building blocks of Transformer that is multi-head self-attention (Rad-ford et al., 2018; Devlin et al., 2018).",
"A model employing an attention-based mechanism generates a probability distribution a = { a 1 , . . . , a n } over the n input units z = { z 1 , . . . , z n } .",
"The idea is to perform a weighted sum of inputs, denoted by (cid:80) ni =1 a i z i , to produce a more context-involved output.",
"The attention vector, a , are commonly interpreted as scores signifying the relative importance of input units.",
"However, counter-intuitively, it is recently observed that the weights generated in the model do not provide meaningful explanations (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).",
"Attention weights are (structurally) identifiable if we can uniquely determine them from the output of the attention unit (Brunner et al., 2019).",
"Identifiability of the attention weights is critical to the model's prediction to be interpretable and replicable.",
"If the weights are not unique, explanatory insights from them might be misleading.",
"The self -attention transforms an input sequence of vectors z = { z 1 , . . . , z n } to a contextualized output sequence y = { y 1 , . . . , y n } , where y k = (cid:80) ni =1 a ( k,i ) z i .",
"The scalar a ( k,i ) captures how much of the i th token contributes to the contextual-ization of k th token.",
"A Transformer layer consists of multiple heads, where each head performs self-attention computations, we break the head computations in two phases: Phase 1: Calculation of attention weights a ( k,i ) .",
"key and query vectors.",
"The dot product of k th query vector and i th key vector gives a ( k,i ) .",
"Phase 2: Calculation of a contextualized representation for each token.",
"It involves mapping input tokens to the value vectors.",
"The contextualized representation for k th token can be computed by the weighted average of the value vectors, where the weight of i th token is a ( k,i ) computed in first phase.",
"The identifiability in Transformer has been recently studied by Brunner et al. (2019) which provides theoretical claims that under mild conditions of input length, attention weights are not unique to the head's output.",
"Essentially their proof was dedicated to the analysis of the computations in the second phase, i.e., token contextualization.",
"However, the theoretical analysis ignored the crucial first phase where the attention weights are generated.",
"Intrinsic to their analysis, the attention identifiability can be studied by studying only the second phase of head computations.",
"However, even if we find another set of weights from the second phase, it depends on the first phase if those weights can be generated as the part of key-query multiplication.",
"In this work, we probe the identifiability of attention weights in Transformer from a perspective that was ignored in Brunner et al. (2019).",
"We explore the previously overlooked first phase of self-attention for its contribution to the identifiability in Transformer.",
"During our analysis of the first phase, we uncover the critical constraint imposed by the size of the key vector 1 d k .",
"The flow of analysis can be described as We first show that the attention weights are identifiable for the input sequence length d s no longer than the size of value vector d v (3.1) (Brunner et al., 2019) 2 .",
"For the case when d s > d v , we analyse the attention weights as raw dot-product (logits) and the softmax ed dot-product (probability sim-plex), independently.",
"An important theoretical finding is that both versions are prone to be unidentifiable.",
"In the case of attention weights as logits (3.2.1), we analytically construct another set of attention weights to claim the unidentifiability.",
"softmax ed logits (3.2.2), we find the attention identifiability to be highly dependent on d k .",
"Thus, the size of key vector plays an important role in the identifiability of the self-attention head.",
"The pieces of evidence suggest that the current analysis in Brunner et al. (2019) ignored the crucial constraints from the first phase in their analysis.",
"To resolve the unidentifiability problem, we propose two simple solutions (4).",
"For the regular setting of the Transformer encoder where d v depends on the number of attention heads and token embedding dimension, we propose to reduce d k .",
"This may lead to more identifiable attention weights.",
"Alternatively, as a more concrete solution, we propose to set d v equal to token embedding dimension while adding head outputs as opposed to the regular approach of concatenation (Vaswani et al., 2017).",
"Embedding dimension can be tuned according to the sequence length up to which identifiability is desired.",
"We evaluate the performance of the proposed variants on varied text classification tasks comprising of ten datasets (5).",
"In this paper, our goal is to provide concrete theoretical analysis, experimental observations, and possible simple solutions to identifiability of attention weights in Transformer.",
"The idea behind identifiable variants of the Transformer isthe harder it is to obtain alternative attention weights, the likelier is they are identifiable, which is a desirable property of the architecture.",
"Thus, our contribution are as follows: We provide a concrete theoretical analysis of identifiability of attention weights which was missing in the previous work by Brunner et al. (2019).",
"We provide Transformer variants that are identifiable and validate them empirically by analysing the numerical rank of the attention matrix generated in the self-attention head of the Transformer encoder.",
"The variants have strong mathematical support and simple to adopt in the standard Transformer settings.",
"We provide empirical evaluations on varied text classification tasks that show higher identifiability does not compromise with the task's performance.",
"A general trend in machine learning research is to mathematically model the input-output relationship from a dataset.",
"This is carried out by quantitatively estimating the set of model parameters that best fit the data.",
"The approach warrants prior (to fitting) examination of the following aspects: The sufficiency of the informative data to the estimate model parameters, i.e., practical identifiability.",
"Thus, the limitation comes from the dataset quality or quantity and may lead to ambiguous data interpretations (Raue et al., 2009).",
"The possibility that the structure of the model allows its parameters to be uniquely estimated, irrespective of the quality or quantity of the available data.",
"This aspect is called structural identifiability.",
"A model is said to be structurally unidentifiable if a different set of parameters yield the same outcome.",
"In this work, we focus on the structural identifiability (Bellman and Astrom, 1970).",
"It is noteworthy that the goodness of the fit of a model on the data does not dictate its structural identifiability.",
"Similar to Brunner et al. (2019), we focus our analysis on the identifiability of attention weights, which are not model parameters, yet demands meaningful interpretations and are crucial to the stability of representations learned by the model.",
"We base our analysis on the building block of Transformer, i.e., the encoder layer (Vaswani et al., 2017).",
"The layer has two sub-layers.",
"First sublayer performs the multi-head self-attention, and second is feed-forward network.",
"Given a sequence of tokens { x 1 , . . . , x d s } , an embedding layer transforms it to a set of vector { z 1 , . . . , z d s } R d e , where d e denotes token embedding dimension.",
"To this set, we add vectors encoding positional information of tokens { p 1 , . . . , p d s } R d e .",
"Multi-head Attention.",
"Input to a head of multihead self-attention module is W R d s d e , i.e., a sequence of d s tokens lying in a d e -dimensional embedding space.",
"Tokens are projected to d q -size query, d k -size key, and d v -size value vectors using linear layers, resulting in the respective matrices Query Q R d s d q , Key K R d s d k , and Value Figure 1: An illustration for a Transformer with two-head attention units.",
"V R d s d v .",
"The attention weights A R d s d s can be computed by A = softmax (cid:32) Q KT (cid:112) d q (cid:33) .",
"The ( i, j ) th element of A shows how much of i th token is influenced by j th token.",
"The output of a head H R d s d e is given by H = A V D = A T , (2) where D R d v d e is a linear layer and the matrix T R d s d e denotes the operation V D .",
"The R d s d e output of multi-head attention can be expressed as a summation over H obtained for each head 3 .",
"The i th row of multi-head output matrix corresponds to the d e dimensional contextualized representation of i th input token.",
"In the original work, Vaswani et al. (2017), the multi-head operation is described as the concatenation of A V obtained from each head followed by a linear transformation D R d e d e .",
"Both the explanations are associated with the same sequence of matrix operations as shown in fig.",
"1.",
"In regular Transformer setting, a token vector is t i { ( z j + p j ) } d s i =1 is d e = 512 dimensional, number of heads h =8, size of d k = d q = d v = d e /h =64.",
"Feed-Forward Network.",
"This sub-layer performs the following transformations on each token representation at the output of a head: y 1 = Linear 1 (Norm( t i + head output for t i )) y 2 = Norm( t i + ReLU(Linear 2 ( y 1 ))) Linear 1 and Linear 2 are linear layers with 2048 and 512 nodes, respectively.",
"Norm denotes mini-batch layer normalization.",
"3 For simplicity, we have omitted head indices.",
"The output of an attention head H is the product of A and T (eq.",
"(2)).",
"Formally, we define identifiability of attention in a head: Definition 3.1.",
"For an attention head's output H , attention weights A are identifiable if there exists a unique solution of A T = H .",
"The above definition can be reformulated as Definition 3.2.",
"A is unidentifiable if there exist an A , ( A (cid:54) = 0 ) , such that ( A + A ) is obtainable from phase-1 of head computations and satisfy ( A + A ) T = A T = A T = 0 .",
"Under this constraint, we get a i T = 0 where a i is the i th row of A .",
"The set of vectors which when multiplied to T gets mapped to zero describes the left null space of T denoted by LN( T ) .",
"The dimension of the left null space of T can be obtained by taking the difference of the total number of rows ( d s ) and the number of linearly independent rows, i.e, rank of the matrix T denoted by rank( T ) .",
"Let dim( ) denotes the dimension of a vector space, then LN( T ) = { v | v TT = 0 } (3) dim (cid:0) LN( T ) (cid:1) = d s rank( T ) .",
"If dim(LN( T )) = 0 then LN( T ) = { 0 } , it leads to the only solution of constraint-R1 that is A = 0 .",
"Therefore, the unidentifiabilty condition does not hold.",
"Now we will prove such a situation exists when the number of tokens is not more than the size of value vector.",
"The matrix T in eq.",
"(2) is product of d s d v value matrix V and d v d e transformation D .",
"We utilize the fact that the rank of product of two matrices P and Q is upper bounded by the minimum of rank( P ) and rank( Q ) , i.e., rank( P Q ) min (cid:0) rank( P ) , rank( Q ) (cid:1) .",
"Thus, the upper bound on rank( T ) in eq.",
"(4) can be determined by rank( T ) min (cid:16) rank( V ) , rank( D ) (cid:17) min (cid:16) min( d s , d v ) , min( d v , d e ) (cid:17) min (cid:16) d s , d v , d v , d e (cid:17) min (cid:16) d s , d v (cid:17) (as d e > d v ) = min (cid:16) d s , 64 (cid:17) (5) where the last inequality is obtained for a head in the regular Transformer for which d v =64.",
"Numerical rank.",
"To substantiate the bounds on rank( T ) as derived above, we set up a model with a single encoder layer (6).",
"The model is trained to predict the sentiment of IMDB reviews (5).",
"We feed the review tokens to the model and store the values generated in T of the first head.",
"A standard technique for calculating the rank of a matrix with floating-point values and computations is to use singular value decomposition.",
"The rank of the matrix will be computed as the number of singular values larger than the predefined threshold 4 .",
"The fig.",
"2 illustrates how the rank changes with the sequence length d s .",
"The numerical rank provides experimental support to the theoretical analysis.",
"rank( T ) = (cid:26) d s if d s d v , d v if d s > d v .",
"(6) Thus, dim (cid:0) LN( T ) (cid:1) = d s rank( T ) = (cid:26) 0 if d s d v , ( d s d v ) if d s > d v .",
"= max ( d s d v , 0) (7) With this, we infer A is identifiable if d s d v = 64 .",
"For the identifiability study, since we focus on a model's capability of learning unique attention weights, we will assume T has the maximum obtainable rank set by its upper bound.",
"In this case, from eq.",
"(7), we obtain a non zero value of dim (cid:0) LN( T ) (cid:1) .",
"It allows us to find infi-nite A 's satisfying ( A + A ) T = A T .",
"However, 4 The threshold value is max( d s , d e ) eps || T || 2 .",
"The eps is floating-point machine epsilon value, i.e., 1.19209e-07 in our experiments constraint-R1 demands A to be obtainable from the first phase of self-attention.",
"As a first step, we focus our analysis on the attention matrix without applying the softmax non-linearity, i.e., A = QK^T / √d_q.",
"The analysis is crucial to identify constraints coming from the first phase of self-attention in the Transformer that impact identifiability.",
"Insights from this will help us analyse the softmax version of A.",
"Since the logits matrix A is obtained from the product of Q and K^T, we can assert that",
"rank(A) ≤ min(rank(Q), rank(K^T)) ≤ min(d_e, d_k, d_q, d_e) = d_k. (8)",
"Therefore, the rank of the attention matrix producible by the head in the first phase of self-attention can at most be equal to the size of the key vectors d_k.",
"On this basis, the head can produce only those (A + ∆A) satisfying rank(A + ∆A) ≤ d_k (constraint-R2).",
"Proposition 3.3. When d_s > d_v, there exists a non-trivial ∆A satisfying both constraint-R1 and constraint-R2, i.e., A is unidentifiable.",
"Proof.",
"Let a_1, ..., a_{d_s} and ∆a_1, ..., ∆a_{d_s} denote the rows of A and ∆A, respectively.",
"Without loss of generality, let a_1, ..., a_{d_k} be linearly independent rows.",
"For all j > d_k, a_j can be represented as a linear combination Σ_{i=1}^{d_k} β_{ji} a_i, where β_{ji} is a scalar.",
"Next, we independently choose the first d_k rows of ∆A, that is {∆a_1, ..., ∆a_{d_k}}, from LN(T).",
"From the same set of linear-combination coefficients β_{ji}, for i ∈ {1, ..., d_k} and j ∈ {d_k + 1, ..., d_s}, we construct the j-th row of ∆A as ∆a_j = Σ_{i=1}^{d_k} β_{ji} ∆a_i.",
"Now, since we can construct the j-th row of (A + ∆A) from the linear combination of its first d_k rows as Σ_{i=1}^{d_k} β_{ji} (a_i + ∆a_i), the rank of (A + ∆A) is not more than d_k.",
"For a set of vectors lying in a linear space, a vector formed by their linear combination should also lie in the same space.",
"Thus, the artificially constructed rows of ∆A also belong to LN(T).",
"Therefore, there exists a ∆A that establishes the proposition, which claims the unidentifiability of A.",
"The softmax over the attention logits generates attention weights in which each row of A (i.e., each a_i) is constrained to be a probability distribution.",
"Hence, we can define constraints over ∆A as: (A + ∆A) ≥ 0 (P1); ∆A T = 0 (P2); ∆A 1 = 0 (P3).",
"P1 is the non-negativity constraint on (A + ∆A), as it is supposed to be the output of a softmax; P2 denotes that the rows of ∆A lie in LN(T); P3 can be derived from the fact that (A + ∆A)1 = 1 ⇒ (A1 + ∆A1) = 1 ⇒ ∆A1 = 0, since A1 = 1.",
"Here, 1 ∈ R^{d_s} is the vector of ones.",
"The constraints P2 and P3 can be combined and reformulated as ∆A [T, 1] = 0.",
"Following a similar analysis as in eq. (7), we can obtain dim(LN([T, 1])) = max(d_s − (d_v + 1), 0).",
"Disregarding the extreme cases when a_i is a one-hot distribution, Brunner et al. (2019) proved the existence and construction of non-trivial ∆A's satisfying all the constraints P1, P2, and P3 (footnote 5: for the sake of brevity, we skip the construction method).",
"However, the proof by Brunner et al. (2019) missed constraint-R2; hence the existence of a non-trivial ∆A satisfying only the set of constraints P1, P2 and P3 may not be a valid proposition for claiming that the attention weights are unidentifiable.",
"Essentially, that work largely ignored the constraints coming from the rank of the matrix that produces A after the softmax.",
"Let A_l denote the logits QK^T / √d_q and softmax(A_l) = (A + ∆A), where the softmax is operated over each row of A_l.",
"We add an extra constraint on A_l: rank(A_l) ≤ d_k (P4).",
"The constraint P4 checks whether there exists a logit matrix A_l that can generate (A + ∆A), given that constraints P1, P2, and P3 are satisfied.",
"The possibility of such an A l will provide sufficient evidence that A is unidentifiable.",
"Next, we investigate how the existence of ∆A is impacted by the size of the key vector d_k (the query and key vector sizes are the same, i.e., d_q = d_k).",
"Let (A + ∆A)(i, k) denote the (i, k)-th element of the matrix.",
"We can retrieve the set of matrices A_l such that softmax(A_l) = A + ∆A, where A_l(i, k) = c_i + log((A + ∆A)(i, k)) (9) for arbitrary row-wise constants c_i.",
"Taking the same constant c for every row, the rows of A_l can be written as c + ã_1, ..., c + ã_{d_s}, where ã_i denotes the elementwise log of the i-th row of (A + ∆A).",
"For an arbitrarily picked ∆A satisfying constraints P1, P2, and P3, the dimension of the affine span S of {ã_1, ..., ã_{d_s}} could be as high as d_s − 1 (fig. 4).",
"In such cases, the best one could do is to choose a c_a ∈ S such that the dimension of the linear span of {ã_1 − c_a, ..., ã_{d_s} − c_a}, i.e., rank(A_l), is d_s − 1.",
"Hence, to satisfy P4, we need d_s − 1 ≤ d_k, i.e., d_s ≤ d_k + 1.",
"Thus, the set of (A + ∆A) satisfying constraints P1, P2 and P3 is not always obtainable from an attention head for d_s > d_k.",
"We postulate that although it is easy to construct ∆A satisfying constraints P1, P2 and P3, it is hard to construct ∆A that also satisfies constraint P4 on the rank of the logit matrix A_l.",
"Therefore, A becomes more identifiable as the size of the key vector decreases.",
"Experimental evidence.",
"We conduct an experiment to validate the minimum possible numerical rank of A_l by constructing ∆A.",
"For (A + ∆A) to be obtainable from phase 1, the minimum possible rank of A_l should not be higher than d_k.",
"From the IMDB dataset (§5), we randomly sample a set of reviews with token sequence length d_s ranging from 66 to 128 (footnote 7: dim(LN([T, 1])) > 0 for d_s > d_v + 1 = 65).",
"For each review, we construct 1000 ∆A's satisfying constraints P1, P2, and P3. First, we train a Transformer encoder-based IMDB review sentiment classifier (§6).",
"We obtain an orthonormal basis for the left null space of [ T , 1 ] using singular value decomposition.",
"To form a ∆A, we generate d_s random linear combinations of the basis vectors (one for each of its rows).",
"Each set of linear-combination coefficients is sampled uniformly from [−10, 10].",
"All the rows are then scaled to satisfy the constraint P1 as mentioned in Brunner et al. (2019).",
"Using eq. (9), we obtain a minimum-rank matrix A_l by putting c = −ã_1.",
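"The construction just described can be sketched as follows (NumPy; the non-negativity rescaling is only schematic, since the exact scheme of Brunner et al. (2019) is skipped here, and the final step follows the text's choice c = −ã_1):",

```python
import numpy as np

rng = np.random.default_rng(0)

def construct_delta_and_logits(A, T):
    """A: (d_s, d_s) row-stochastic attention (softmax output, strictly positive);
    T: (d_s, d_v) value-transform product stored from the trained head."""
    d_s = A.shape[0]
    aug = np.hstack([T, np.ones((d_s, 1))])   # the augmented matrix [T, 1]
    U, s, _ = np.linalg.svd(aug)              # left singular vectors of [T, 1]
    r = int((s > 1e-10).sum())
    basis = U[:, r:]                          # orthonormal basis of LN([T, 1])
    # One random combination of the basis vectors per row, coefficients in [-10, 10];
    # each row of dA then satisfies P2 and P3 by construction.
    coeffs = rng.uniform(-10.0, 10.0, size=(basis.shape[1], d_s))
    dA = (basis @ coeffs).T
    # Schematic per-row rescaling so that (A + dA) stays strictly positive (P1).
    alpha = np.min(A / np.maximum(-dA, 1e-12), axis=1, keepdims=True)
    dA = dA * np.minimum(0.9 * alpha, 1.0)
    M = A + dA                                # row-stochastic by construction
    a_tilde = np.log(M)                       # elementwise log of each row
    A_l = a_tilde - a_tilde[0]                # eq. (9) with c = -a_tilde_1:
    return dA, A_l                            # first row is zero, rank <= d_s - 1

# np.linalg.matrix_rank(A_l) can then be compared against d_k (constraint P4).
```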
"Figure 5 depicts the obtained numerical rank of A l .",
"We observed that all the A_l's obtained from (A + ∆A) (using eq. (9)) are full row-rank matrices.",
"However, from the first phase of self-attention, the maximum obtainable rank of A l is d k = 64 .",
"Thus, the experimentally constructed A_l's cannot claim unidentifiability of A, as they fail to satisfy constraint P4, whereas for Brunner et al. (2019) they fall under the solution set used to prove unidentifiability, since they meet constraints P1, P2 and P3.",
"Based on the identifiability analysis in §3, we propose basic solutions to make the Transformer's attention weights identifiable.",
"Decoupling d k .",
"Contrary to the regular Transformer setting where d_k = d_v, a simple approach is to decrease the value of d_k, the size of the key and query vectors.",
"This reduces the possible solutions of ∆A by putting harder constraints on the rank of the attention logits, i.e., A_l in eq. (9).",
"However, theoretically, d_k decides the upper bound on the dimension of the space to which token embeddings are projected before the dot product.",
"The higher the upper bound, the more degrees of freedom there are to choose the subspace dimensions, compared to the lower-d_k variants.",
"Thus, there is a plausible trade-off between d_k-induced identifiability and the upper bound on the dimension of the projected space.",
"Head Addition.",
"To resolve the unidentifiability issue when the sequence length exceeds the size of the value vector, we propose to keep the value vector size and token embedding dimension greater than (or equal to) the maximum allowed number of input tokens, i.e., d_v ≥ d_{s-max}.",
"In Vaswani et al. (2017), d_v was bound to be equal to d_e/h, where d_e is the token embedding dimension and h is the number of heads.",
"This constraint on d v is because of the concatenation of h self-attention heads to produce d e -sized output at the first sub-layer of the encoder.",
"Thus, to decouple d v from this constraint, we keep d v = d e and add each head's output.",
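"A minimal PyTorch sketch of this head-addition variant (class and parameter names are our own; it shows d_v = d_e with head outputs summed rather than concatenated):",

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveHeadSelfAttention(nn.Module):
    """Identifiable variant: d_v = d_e and head outputs are added, not concatenated."""
    def __init__(self, d_e: int = 512, d_k: int = 8, n_heads: int = 8):
        super().__init__()
        self.d_k = d_k
        self.q = nn.ModuleList([nn.Linear(d_e, d_k) for _ in range(n_heads)])
        self.k = nn.ModuleList([nn.Linear(d_e, d_k) for _ in range(n_heads)])
        self.v = nn.ModuleList([nn.Linear(d_e, d_e) for _ in range(n_heads)])  # d_v = d_e

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (batch, d_s, d_e)
        out = torch.zeros_like(x)
        for q, k, v in zip(self.q, self.k, self.v):
            logits = q(x) @ k(x).transpose(-2, -1) / self.d_k ** 0.5
            out = out + F.softmax(logits, dim=-1) @ v(x)    # sum the heads
        return out                                          # (batch, d_s, d_e)
```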
"5 Classification Tasks. For the empirical analysis of our proposed solutions mentioned in §4, we conduct our experiments on the following varied text classification tasks.",
"5.1 Small-Scale Datasets. IMDB (Maas et al., 2011).",
"The dataset for the task of sentiment classification consists of IMDB movie reviews labeled with positive or negative sentiment.",
"Each of the train and test sets contains 25,000 samples, equally distributed between the two sentiment polarities.",
"TREC (Voorhees and Tice, 2000).",
"We use the 6-class version of the dataset for the task of question classification consisting of open-domain, facet-based questions.",
"There are 5,452 and 500 samples for training and testing, respectively.",
"SST (Socher et al., 2013).",
"The Stanford sentiment analysis dataset consists of 11,855 sentences obtained from movie reviews.",
"We use the 3-class version of the dataset for the task of sentiment classification .",
"Each review is labeled as positive, neutral, or negative.",
"The provided train/test/valid split is 8,544/2,210/1,101.",
"(Footnote 8: d_{s-max} ≤ d_e, as in the regular Transformer setting.)",
"SNLI (Bowman et al., 2015).",
"The dataset contains 549,367 samples in the training set, 9,842 samples in the validation set, and 9,824 samples in the test set.",
"For the task of recognizing textual entailment, each sample consists of a premise-hypothesis sentence pair and a label indicating whether the premise entails the hypothesis, contradicts it, or is neutral with respect to it.",
"Yelp.",
"We use the large-scale Yelp review dataset for the task of binary sentiment classification .",
"There are 560,000 samples for training and 38,000 samples for testing, equally split into positive and negative polarities.",
"DBPedia.",
"The DBpedia ontology dataset for topic classification consists of 14 non-overlapping classes, each with 40,000 samples for training and 5,000 samples for testing.",
"Sogou News.",
"The dataset for news article classification consists of 450,000 samples for training and 60,000 for testing.",
"Each article is labeled with one of 5 news categories.",
"The dataset is perfectly balanced.",
"AG News.",
"The dataset consists of news articles partitioned into four categories.",
"The balanced train and test sets consist of 120,000 and 7,600 samples, respectively.",
"Amazon Reviews.",
"For the task of sentiment classification, the dataset contains 3,600,000 samples for training and 400,000 samples for testing.",
"The samples are equally divided into positive and negative sentiment labels.",
"Except for SST and SNLI, where a validation split is already provided, we flag 30% of the train set as the validation set and use the remaining 70% for model parameter learning.",
"Setting up the encoder.",
"We normalize the text by lower-casing, removing special characters, etc. (footnote 9: https://pytorch.org/text/_modules/torchtext/data/utils.html).",
"For each task, we construct a separate 1-gram vocabulary U and initialize trainable token embeddings (|U| × d_e) randomly sampled from N(0, 1).",
"Similarly, we randomly initialize a (d_{s-max} × d_e) positional embedding.",
"The encoder (§2.2) takes as input a sequence of token vectors (d_s × d_e) with added positional vectors.",
"The input is then projected to key and query vectors of size d_k ∈ {1, 2, 4, 8, 16, 32, 64, 128, 256}.",
"For the regular Transformer setting, we fix the number of heads h to 8 and the size of the value vector to d_v = d_e/h, that is, 64.",
"For each token at the input, the outputs of attention heads are concatenated to generate a d e -sized vector.",
"For the identifiable variant of the Transformer encoder, d_v = d_e = 512, which is equal to d_{s-max}, keeping it identifiable up to the maximum permissible number of tokens.",
"The outputs of all the heads are then added.",
"Each token's contextualized representation (the added head outputs) is then passed through the feed-forward network (§2.2).",
"For classification, we use the encoder layer's output for the first token and pass it through a linear classification layer.",
"In datasets with more than two classes, the classifier output is softmaxed.",
"In the case of SNLI, we use the shared encoder for both premise and hypothesis; the output of their first tokens is then concatenated just before the final classification layer.",
"We use the Adam optimizer, with learning rate 0.001, to minimize the cross-entropy loss between the target and predicted labels.",
"For all the experiments, we keep the batch size as 256 and train for 20 epochs.",
"We report the test accuracy obtained at the epoch with the best validation accuracy.",
"Numerical rank.",
"To generate the numerical rank plot on the IMDB dataset shown in fig. 2, we train a separate Transformer encoder-based classifier.",
"For a particular d_s value, we sample 100 reviews from the dataset with token length ≥ d_s and clip each review to the maximum length d_s.",
"The clipping will ensure the number of tokens is d s before feeding it to the encoder.",
"The numerical rank is calculated for T 's obtained from the first head of the encoder.",
"For the identifiable variant, similar to §3.1, we plot the numerical rank of T against the input sequence length, as shown in fig. 6.",
"Unlike fig. 2, where dim(LN(T)) increases linearly after d_s = 64, we find the dimension remains zero for much larger d_s (≈ 380).",
"The zero-dimensional (left) null space of T confirms that there exists no non-trivial solution to constraint-R1, i.e., ∆A ∈ {0}.",
"Thus, the attention weights A are identifiable for a larger range of length of the input sequence.",
"It is important that the identifiability of attention weights should not come at the cost of reduced performance of the model.",
"To investigate this issue, we compare the performance of the identifiable Transformer encoder against its regular settings (§6) on varied text classification tasks.",
"As discussed in §4 as one of the solutions, the regular Transformer can be made more identifiable by decreasing the size of the key vector d_k.",
"The rows of Table 1 labeled Con denote the regular Transformer setting with varying sizes of the key vector.",
"We observe that the classification accuracy at lower d_k is comparable to or higher than at large d_k values; thus, the enhanced identifiability does not compromise the model's classification accuracy.",
"However, we notice a general performance decline with an increase in the size of the key vector.",
"We speculate that for simple classification tasks, the lower-dimensional projection for key and query vector works well.",
"However, as the task becomes more involved, a higher dimension for the projected subspace could be essential.",
"Nonetheless, as we do not have strong theoretical findings, we leave this observation for future work.",
"Another solution to identifiability is to increase d v to d e and add the heads' outputs.",
"This setting corresponds to the Add rows in Table 1.",
"For key vector sizes d_k = 1, 2, and 4, we find the identifiable Transformer's performance is comparable to the regular settings.",

Table 1: The test accuracy on varied text classification tasks spread over ten datasets (columns give the size of the key vector d_k).

| Dataset | Version | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 |
|---|---|---|---|---|---|---|---|---|---|---|
| IMDB | Con | 0.884 | 0.888 | 0.886 | 0.888 | 0.846 | 0.824 | 0.803 | 0.788 | 0.755 |
| IMDB | Add | 0.888 | 0.885 | 0.887 | 0.884 | 0.886 | 0.882 | 0.877 | 0.832 | 0.825 |
| TREC | Con | 0.836 | 0.836 | 0.840 | 0.822 | 0.823 | 0.764 | 0.786 | 0.706 | 0.737 |
| TREC | Add | 0.841 | 0.842 | 0.835 | 0.842 | 0.841 | 0.836 | 0.809 | 0.809 | 0.771 |
| SST | Con | 0.643 | 0.625 | 0.627 | 0.609 | 0.603 | 0.582 | 0.574 | 0.573 | 0.554 |
| SST | Add | 0.599 | 0.618 | 0.628 | 0.633 | 0.628 | 0.629 | 0.592 | 0.581 | 0.586 |
| SNLI | Con | 0.675 | 0.674 | 0.673 | 0.672 | 0.662 | 0.659 | 0.659 | 0.655 | 0.648 |
| SNLI | Add | 0.683 | 0.677 | 0.674 | 0.676 | 0.673 | 0.669 | 0.663 | 0.664 | 0.655 |
| Yelp | Con | 0.913 | 0.911 | 0.907 | 0.898 | 0.879 | 0.862 | 0.857 | 0.849 | 0.837 |
| Yelp | Add | 0.914 | 0.915 | 0.916 | 0.914 | 0.915 | 0.916 | 0.910 | 0.909 | 0.891 |
| DBPedia | Con | 0.979 | 0.977 | 0.977 | 0.971 | 0.966 | 0.961 | 0.957 | 0.951 | 0.949 |
| DBPedia | Add | 0.979 | 0.978 | 0.979 | 0.977 | 0.978 | 0.973 | 0.970 | 0.969 | 0.964 |
| Sogou | Con | 0.915 | 0.907 | 0.898 | 0.900 | 0.893 | 0.888 | 0.868 | 0.858 | 0.838 |
| Sogou | Add | 0.915 | 0.908 | 0.906 | 0.904 | 0.913 | 0.914 | 0.910 | 0.906 | 0.899 |
| AG News | Con | 0.906 | 0.903 | 0.904 | 0.904 | 0.886 | 0.877 | 0.870 | 0.870 | 0.869 |
| AG News | Add | 0.902 | 0.908 | 0.907 | 0.906 | 0.897 | 0.899 | 0.901 | 0.897 | 0.893 |
| Yahoo | Con | 0.695 | 0.690 | 0.684 | 0.664 | 0.644 | 0.627 | 0.616 | 0.597 | 0.574 |
| Yahoo | Add | 0.697 | 0.695 | 0.696 | 0.693 | 0.693 | 0.694 | 0.688 | 0.649 | 0.683 |
| Amazon | Con | 0.924 | 0.925 | 0.923 | 0.922 | 0.900 | 0.892 | 0.887 | 0.882 | 0.873 |
| Amazon | Add | 0.925 | 0.923 | 0.925 | 0.924 | 0.924 | 0.920 | 0.907 | 0.896 | 0.889 |
"For d_k ≥ 8, as a general observation, we find the performance of Add does not drop as drastically as Con with an increase in d_k.",
"This could be due to the larger size of the value vector leading to a greater number of parameters in Add, which compensates for the otherwise significant reduction in the model's accuracy.",
"On the large-scale datasets, we observe that Add performs slightly better than Con .",
"Intuitively, as shown in fig. 1, we can increase the size of the value vector to increase the dimension of the space onto which each token is projected.",
"A higher dimensional subspace can contain more semantic information to perform the specific task.",
"Even though the theoretical analysis shows the possibility of a full row rank of T, and hence identifiable attention weights, the T obtained from a trained model might not have all of its rows linearly independent as d_s increases.",
"We can explain this from the semantic similarities between words co-occurring together (Harris, 1954).",
"The similarity is captured as the semantic relationship, such as dot product, between vectors in a linear space.",
"As the number of tokens in a sentence, i.e., d_s, increases, it becomes more likely that a token vector can be obtained from a linear combination of the other token vectors.",
"This work probed the Transformer for the identifiability of self-attention, i.e., whether the attention weights can be uniquely identified from the head's output.",
"With theoretical analysis and supporting empirical evidence, we were able to identify the limitations of the existing study by Brunner et al. (2019).",
"We found the study largely ignored the constraint coming from the first phase of self-attention in the encoder, i.e., the size of the key vector.",
"Later, we proved how d_k can be utilized to make the attention weights more identifiable.",
"To give a more concrete solution, we proposed encoder variants that are more identifiable, theoretically as well as experimentally, for a large range of input sequence lengths.",
"The identifiable variants show no performance drop in experiments on varied text classification tasks.",
"Future works may analyse the critical impact of identifiability on the explainability and interpretability of the Transformer.",
"This research is supported by A*STAR under its RIE 2020 Advanced Manufacturing and Engineering programmatic grant, Award",
"No. A19E2b0098."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"Abstractive summarization, the task of generating a concise summary of input documents, requires: (1) reasoning over the source document to determine the salient pieces of information scattered across the long document, and (2) composing a cohesive text by reconstructing these salient facts into a shorter summary that faithfully reflects the complex relations connecting these facts.",
"In this paper, we adapt TP-TRANSFORMER (Schlag et al., 2019), an architecture that enriches the original Transformer (Vaswani et al., 2017) with the explicitly compositional Tensor Product Representation (TPR), for the task of abstractive summarization.",
"The key feature of our model is a structural bias that we introduce by encoding two separate representations for each token to represent the syntactic structure (with role vectors ) and semantic content (with filler vectors ) separately.",
"The model then binds the role and filler vectors into the TPR as the layer output.",
"We argue that the structured intermediate representations enable the model to take better control of the contents (salient facts) and structures (the syntax that connects the facts) when generating the summary.",
"Empirically, we show that our TP-TRANSFORMER outperforms the Transformer and the original TPTRANSFORMER significantly on several abstractive summarization datasets based on both automatic and human evaluations.",
"On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors and improved syntactic interpretability in the TPR layer outputs.",
"(Work partially done while at Microsoft Research.)",
"1 Introduction. Abstractive summarization is the task of generating a shorter version of a source text without necessarily reusing the sentences from the original source,",
"Figure 1. Original Text (truncated): Authorities said the incident took place on Sao Joao beach in Caparica, south-west of Lisbon.",
"The National Maritime Authority said a middle-aged man and a young girl died after they were unable to avoid the plane.",
"[...] Other reports said the victims had been sunbathing when the plane made its emergency landing.",
"[...] Video footage from the scene carried by local broadcasters showed a small recreational plane parked on the sand, apparently intact and surrounded by beachgoers and emergency workers.",
"[...] Reference Summary: A man and a child have been killed after a light aircraft made an emergency landing on a beach in Portugal.",
"while preserving the meaning of its salient contents.",
"It is a complex task that requires: semantic understanding of the source text and reasoning over its lexical units, making inferences about their relation to extract salient facts which are scattered across the long document, as well as generating a concise and coherent sequence of new sentences that covers the salient facts.",
"While humans are remarkably good at this type of reasoning and abstraction, developing models that are capable of extraction, comprehension, abstraction, and reformulation of salient contents has been an open research question.",
"One prominent aspect of abstractive summarization is that models struggle with combining multiple salient aspects in the source text into a coherent and grammatical set of sentences that preserve the original information in the source document.",
"As shown in Fig. 1, these pieces of salient information (\"death\", \"emergency landing\", \"beach\") are often connected by complex syntactic, causal, and temporal relations and are loosely grouped under the main topic of the source document. Transformer models (Vaswani et al., 2017) encode the syntactic and semantic information of the input text into a single representation space with the self-attention, and decode the salient aspects into a short summary with the cross-attention. However, despite the large number of training examples, current state-of-the-art Transformer-based approaches still struggle with systematic generalization of the composition of multiple salient pieces of information. In this paper, we investigate new types of computational primitives for Transformers based on Tensor Product Representations (TPRs) (Smolensky, 1990), which are explicitly-compositional vector embeddings of symbolic structures. A Tensor Product Representation encodes a constituent in a symbolic structure as a composite of a role, which encodes the structural information (e.g., the dependency relation with another word), and a filler, which encodes the content of the constituent (e.g., the meaning of a word). Analogously, the TP-TRANSFORMER constructs a pair of representations for every token at every layer: a filler vector returned by attention and a novel role vector. As visualized in Fig. 2, the model then binds the role and filler vectors to produce the output of every token as a TPR. We adapt the TP-TRANSFORMER (Schlag et al., 2019), which was proposed for solving mathematics problems, to the task of abstractive summarization. Unlike the original TP-TRANSFORMER, which directly projects the input representation into a continuous role vector space, our model generates the role vectors by attending to a learned dictionary of role embeddings (Palangi et al., 2018). We observe that most learned role attention distributions are approximately one-hot, thus restricting the role vectors to a highly discrete space. This structural inductive bias encourages the TP-TRANSFORMER to encode the syntactic information in the discrete roles while isolating the semantics in the continuous fillers. To test the ability of our TP-TRANSFORMER with discrete roles against the standard Transformer and the TP-TRANSFORMER with continuous roles, we build several models from scratch on a number of summarization datasets spanning different degrees of abstractiveness, output summary lengths, and domains. Our TP-TRANSFORMER significantly outperforms the standard Transformer and the TP-TRANSFORMER with continuous roles on the XSum (Narayan et al., 2018), Wikihow (Koupaee and Wang, 2018), and Arxiv (Cohan et al., 2018) datasets, and achieves competitive performance on the CNN/Daily Mail (Hermann et al., 2015; Nallapati et al., 2016) dataset, measured by automatic metrics including ROUGE (Lin, 2004) and METEOR (Denkowski and Lavie, 2014). Our human evaluations on the XSum and Wikihow datasets also correlate with the automatic metrics, demonstrating that summaries generated by our TP-TRANSFORMER are indeed better than the Transformer's generations. Furthermore, to investigate the structural representation that naturally emerges during training and the advantage of having compositional TPR hidden states, we design a suite of decoder probing tasks to explore the information encoded in the role, filler, and TPR spaces. We adopt the encoder probing task design presented in Tenney et al.",
"(2019b) and create four decoder probing tasks: Part-of-speech tagging (POS), Dependency Labeling (DEP), Semantic Role Labeling (SRL), and Named Entity Labeling (NEL). Our findings collectively show that the decoder's role vectors encode a wealth of syntactic structures, aiding the decoder in deducing the syntactic features (e.g., being a proper noun, being the object of the root predicate) of the next token to be generated. The decoder's filler vectors, on the other hand, encode more semantic information (e.g., being a person's name). Furthermore, we observe that having the compositional TPR results in a more interpretable final representation than the original Transformer has at every layer, regarding the syntactic features of the next word to be generated. Our results support our hypothesis that by disentangling semantics and syntax, such structured intermediate representations enable the model to better control both the content to be conveyed and the syntactic structure needed to express it, ultimately improving the factuality and grammaticality of the generated summaries. Our overall contributions are as follows: (1) we present a novel adaptation of the original Transformer architecture that incorporates a dictionary of role embeddings at every layer and generates Tensor Product Representations by binding the role vectors with attention outputs (filler vectors); (2) we show that our TP-TRANSFORMER outperforms the Transformer as well as the original TP-TRANSFORMER (Schlag et al., 2019) on several abstractive summarization datasets; and (3) we demonstrate the emergent structures in representations by revealing the disentangled syntactic and semantic information encoded in the role and filler spaces. 2 The TP-TRANSFORMER. We build our TP-TRANSFORMER based on the Transformer architecture used in Raffel et al. (2020). A TP-TRANSFORMER encoder applied to a sequence of tokens i = 1, ..., I can be seen as a 2-dimensional lattice of cells (i, l), where i is the position of the input token and l = 1, ..., L are the layer indices. (Figure 2: The Filler and Role Binding operation of the TP-TRANSFORMER model architecture.) All cells in the encoder have the same architecture and the cells at the same layer share the same weights. We introduce the basic components of a TP-TRANSFORMER cell in Sec. 2.2 and its encoder and decoder cells in Sec. 2.3. 2.1 Tensor-Product Representation Basics. Tensor-Product Representations (TPRs; Smolensky, 1990) are explicitly-compositional vector embeddings of symbolic structures, where each constituent of the structure is represented as the product of a role vector, which encodes its structural information, and a filler vector, which contains the content. The TPR of a whole structure is the sum of the representations of its constituents. To represent any 3-digit number using TPRs, we need three role vectors, { r(p1): ones place, r(p2): tens place, r(p3): hundreds place }, and ten filler vectors f for the ten digits. For example, the TPR of the number 985 is r(p1) ⊗ f(5) + r(p2) ⊗ f(8) + r(p3) ⊗ f(9), where ⊗ is the tensor product. When representing a number, the role vectors operate similarly to the positional embeddings in a Transformer (Vaswani et al., 2017).",
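"The 985 example can be made concrete with a few lines of NumPy (the role and filler dimensions below are arbitrary assumptions):",

```python
import numpy as np

rng = np.random.default_rng(0)
roles = rng.standard_normal((3, 4))     # r(p1), r(p2), r(p3): ones/tens/hundreds
fillers = rng.standard_normal((10, 5))  # f(0) ... f(9): one filler per digit

# TPR(985) = r(p1) (x) f(5) + r(p2) (x) f(8) + r(p3) (x) f(9)
tpr_985 = (np.outer(roles[0], fillers[5])
           + np.outer(roles[1], fillers[8])
           + np.outer(roles[2], fillers[9]))
print(tpr_985.shape)  # (4, 5): a sum of role-filler outer (tensor) products
```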
"However, when representing natural languages, the role vectors need to encode a variety of structural information (e.g., predicate-argument structure, tense, etc.), and thus it is infeasible to hand-design an entire suite of role vectors as we did for numbers. To overcome this challenge, for every token we dynamically compute its role vector from a dictionary of a finite number of role embeddings learned with the entire model, and treat the self-attention outputs as the fillers. We introduce the full computation procedure in Sec. 2.2.2. 2.2 The TP-TRANSFORMER Cell. Similar to the basic Transformer cell, at every layer, a TP-TRANSFORMER encoder cell starts with a layer normalization and the multi-head self-attention followed by a residual layer. Then, the cell treats the output vectors as fillers and binds them to role vectors to construct a Tensor Product Representation, which is then passed through the feed-forward network to yield the final states. 2.2.1 Multi-Head Attention. The TP-TRANSFORMER cell adopts multi-head attention (Vaswani et al., 2017) to enable information passing between tokens. At any layer, denote the input vectors as X ∈ R^{k_x × d_m} and the attention target vectors as Y ∈ R^{k_y × d_m}, where k_x, k_y are the lengths of the sequences and d_m is the dimension of the input vectors. In the case of self-attention, we have Y = X; for the encoder-decoder cross-attention, Y is the encoder's output vectors. We first apply layer normalization (Ba et al., 2016) to get X̄ and then linearly project it to the query, key, and value vectors for each attention head h = 1, ..., H: Q^h = X̄ W_q^h + b_q^h; K^h = Y W_k^h + b_k^h; V^h = Y W_v^h + b_v^h, (1) where W_q, W_k, W_v ∈ R^{d_m × d_k}. The attention output matrix V̄ for each head h is computed as: V̄ = softmax(QK^T / √d_k) V, (2) where d_k is the dimension of the key vectors K. The multi-head attention output O is the concatenation of the attention outputs from all heads, followed by another linear projection W_o ∈ R^{d_m × d_m}. We end the multi-head attention with a residual connection with the layer input vectors X: MHAttn(X, Y) = X + [V̄^1, ..., V̄^H] W_o, (3) where V̄^h is the attention output for the h-th head. 2.2.2 Computing TPRs. Role Embeddings. Following Palangi et al. (2018), but departing from Schlag et al. (2019), every layer of our TP-TRANSFORMER is equipped with a dictionary r ∈ R^{N_r × d_r} of N_r distinct role embeddings of dimension d_r. Each role embedding r_n, n = 1, ..., N_r, is randomly initialized in the entire network. The role embeddings are normalized before computing role vectors: r̄_n = r_n / ||r_n||_2 for n = 1, ..., N_r. (4) At each layer, the model computes a weighted combination of these role embeddings r̄ to form a unique role vector for every token. Multi-Head TPR Binding. Our filler vectors correspond to the multi-head attention output F = MHAttn(X) (Eqn. 3). The filler F of each token has a corresponding role vector R. We first compute R^h ∈ R^{d_r} at every head h = 1, ..., H as a weighted average of the normalized role embeddings r̄. We then concatenate the R^h ∈ R^{k_x × d_r} of the H heads to get the multi-head role vectors R ∈ R^{k_x × (d_r · H)} for all k_x tokens. We define this process formally as: R^h = softmax(F W_r^h) r̄; R = [R^1, ..., R^H], (5) where W_r ∈ R^{d_m × N_r} is the linear projection that computes the attention scores over the role embeddings for every token.",
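"A minimal PyTorch sketch of the role computation in eqs. (4)-(5), together with the Hadamard binding of eq. (6) that is introduced next (shapes follow the stated configuration; class and variable names are our own):",

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

class MultiHeadTPRBinding(nn.Module):
    """Sketch of eqs. (4)-(6): role attention over a role dictionary, then binding."""
    def __init__(self, d_m: int = 512, n_heads: int = 8, n_roles: int = 50):
        super().__init__()
        self.n_heads = n_heads
        self.d_r = d_m // n_heads                     # d_r * H = d_m, so R matches F
        self.role_emb = nn.Parameter(torch.randn(n_roles, self.d_r))  # dictionary r
        self.w_r = nn.Linear(d_m, n_heads * n_roles)  # per-head role scores (W_r)

    def forward(self, F: torch.Tensor) -> torch.Tensor:  # F: (k_x, d_m) fillers
        k_x = F.shape[0]
        r_bar = Fn.normalize(self.role_emb, dim=-1)       # eq. (4): unit-norm roles
        scores = self.w_r(F).view(k_x, self.n_heads, -1)
        attn = Fn.softmax(scores, dim=-1)                 # role attention per head
        R_h = torch.einsum('khn,nd->khd', attn, r_bar)    # eq. (5): weighted average
        R = R_h.reshape(k_x, -1)                          # concat heads -> (k_x, d_m)
        return R * F + F                                  # eq. (6): R ⊙ F + F
```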
"We use a Hadamard product to approximate the full tensor product in binding the role vectors R with the filler vectors F, as it was shown in Schlag et al. (2019) that using the Hadamard product allows learning an optimal lower-rank approximation of the full TPRs. (Footnote 2: We set d_r · H = d_m so that the multi-head role vectors R have the same dimension as F. Footnote 3: The Hadamard, or elementwise, product is the diagonal of the full tensor product.) The binding operation is followed by an addition with the unbound fillers (F) to return the residual TPR vectors: TPR(F) = R ⊙ F + F. (6) 2.2.3 Residual Feed-forward Layer. The feed-forward layer of a cell consists of a linear projection followed by a ReLU activation and a second linear projection. The feed-forward output is then added to the input vectors: FF(X) = X + ReLU(X̄ W_g + b_g) W_f + b_f, (7) where W_g ∈ R^{d_m × d_f}, b_g ∈ R^{d_f}, W_f ∈ R^{d_f × d_m}, b_f ∈ R^{d_m}, and X̄ is the function argument. 2.3 TP-TRANSFORMER Encoder & Decoder. Given the components of our basic TP-TRANSFORMER cell in the previous section, we now describe how we construct the TP-TRANSFORMER encoder and decoder. First, the self-attention and the encoder-decoder cross-attention for every token can be computed as: Self(X) = TPR(MHAttn(X, X)); Cross(Y, H) = TPR(MHAttn(Y, H)), (8) where H is the output of the encoder's final layer and Y represents the previous layer's output vectors of either the partially (so-far) decoded sequence at test time or the masked reference summary at training time. The encoder's and decoder's operations at every layer can be summarized as: Encode(X) = FF(Self(X)); Decode(H, Y) = FF(Cross(Self(Y), H)). (9) After L layers of encoding and decoding, the final distribution of the i-th output token is given by: z_i = softmax(E^T y_{i,L}), (10) where Y_L = Decode(H, Y_{L−1}) are the decoder's output states at the last layer and E is the tied input/output word embedding matrix. 3 Summarization Experiments. 3.1 Abstractive Summarization Datasets. We train our models on four English abstractive summarization datasets varying in level of abstractiveness (explained below), summary length, and input domain. XSum (Narayan et al., 2018) consists of 227k BBC articles from 2010 to 2017 concerning various subjects, along with professionally written single-sentence summaries. Its summaries cover a wide variety of syntactic structures (relative clauses, etc.) and relations (causal, temporal, etc.). Wikihow (Koupaee and Wang, 2018) is a dataset consisting of instructions from the WikiHow.com website. Each of its 200k examples has multiple instruction-step paragraphs, each paired with a summarizing sentence. The task is to generate the concatenated summaries of all paragraphs.",

Table 1: Example summaries from the four datasets.

| Dataset | Summary |
|---|---|
| XSum | Luxury fashion designer Burberry has returned to profit after opening new stores and spending more on online marketing. |
| Wikihow | Build a trustworthy bond with your piggy. Research different training methods. Choose the training method that works best for you and your guinea pig. Gather the materials that you will need for training. |
| Arxiv (abbreviated) | We study the phase behavior of a nematic liquid crystal confined between a flat substrate with strong anchoring and a patterned substrate whose structure and local anchoring strength we vary. [...] In addition the effective energy method allows one to determine the energy barriers between two states in a bistable nematic device. |
| CNN/DM | Mentally ill inmates in Miami are housed on the "forgotten floor". Judge Steven Leifman says most are there as a result of "avoidable felonies". While CNN tours facility, patient shouts: "I am the son of the president". |
"Arxiv (Cohan et al., 2018) is a long document summarization dataset of scientific publications from arXiv.org (113k).",
"The task is to generate the abstract from the paper body.",
"CNN/Daily Mail (Hermann et al., 2015; Nallapati et al., 2016) dataset contains 93k articles from CNN and 220k articles from the Daily Mail.",
"Every article is accompanied by a few human-written bullet points about its content.",
"We use the non-anonymized version used in See et al. (2017).",
"Dataset Abstractiveness.",
"We show a summary from each of these four datasets in Table 1.",
"According to the comparison made by Zhang et al. (2020) using the coverage and density measures (Grusky et al., 2018), the XSum and Wikihow datasets are more abstractive than the others, since their summaries rarely contain large chunks of words overlapping with the source documents.",
"CNN/Daily Mail is the least abstractive of the four.",
"Furthermore, in most cases, a sentence in a CNN/Daily Mail summary only refers to a single sentence from the source document as suggested in Lebanoff et al. (2019), while a sentence in an XSum or Wikihow summary usually aggregates information from multiple source sentences.",
"The Transformer and the two TP-TRANSFORMERS all have 6 layers, 8 heads per layer, dimension per head d k =64, model dimension d m =512, and feed-forward dimension d f =2048 for the encoder and decoder.",
"Our TP-TRANSFORMER with discrete roles has N r =50 role embeddings of dimension d r =64 at every layer.",
"For each dataset above, we train all three models from scratch using an Adafactor optimizer (Shazeer and Stern, 2018) with square-root learning rate decay and a dropout rate of 0.1.",
"We evaluate the models using automatic metrics including ROUGE F1 score and METEOR.",
"We report automatic metric scores from our evaluated models in Table 2.",
"We refer to the TP-TRANSFORMER with freely-generated continuous role vectors (no role dictionary) (Schlag et al., 2019) as TPT-c, and our own TP-TRANSFORMER with a discrete set of role embeddings as TPT-d.",

Table 3: Human evaluation results on 120 random samples from the XSum (Narayan et al., 2018) and Wikihow (Koupaee and Wang, 2018) test sets.

| Dataset | Outcome | Grammar | Coherency | Faithfulness | Saliency | Repetition | Overall |
|---|---|---|---|---|---|---|---|
| XSum | Transformer wins | 39 | 48 | 43 | 50 | 38 | 48 |
| XSum | TP-TRANSFORMER wins | 47 | 48 | 46 | 47 | 42 | 52 |
| XSum | Tie / No agreement | 34 | 24 | 31 | 23 | 40 | 20 |
| Wikihow | Transformer wins | 45 | 45 | 43 | 54 | 48 | 43 |
| Wikihow | TP-TRANSFORMER wins | 48 | 45 | 46 | 47 | 48 | 59 |
| Wikihow | Tie / No agreement | 27 | 30 | 31 | 19 | 24 | 18 |
"On the XSum, Arxiv, and Wikihow datasets, our TP-TRANSFORMER (TPT-d) outperforms the original Transformer on all metrics.",
"On the CNN/Daily Mail dataset, both models obtain similar performance across all metrics.",
"On every dataset, the TPT-c model, which excels on the mathematics dataset, is the worst of the three models being compared.",
"This suggests that continuous role vectors are not suited to the summarization tasks.",
"As we explain in Sec. 3.1, CNN/Daily Mail is the least abstractive one among the four datasets.",
"In contrast, summaries from the XSum and Wikihow datasets contain very few n-grams (n > 2) that can be copied from the source documents and thus push the model's ability to compose a coherent summary restating the salient aspects from the source.",
"Furthermore, as illustrated in Table 1, the XSum summary contains a long sentence that combines multiple pieces of information scattered through the long source document.",
"These facts are usually connected by syntactic, temporal (footnote 4), or causal (footnote 5) relations, and thus the model must be able to connect and reason across these salient facts and then convert them into a coherent sentence that faithfully reflects the original facts and their relations.",
"We argue that the compositional TPR can better enable these abilities required for XSum, where we indeed find that our TP-TRANSFORMER achieves the largest advantage over the Transformer among its improvements on all datasets.",
"We conduct human evaluation to compare the summaries generated by the Transformer and our TP-TRANSFORMER.",
"We randomly sample 120 examples from the test sets of XSum and Wikihow datasets with the beam-searched model summaries.",
"(Footnote 4: \"returned to profit after opening new stores\". Footnote 5: \"Opening new stores and spending more on online marketing\" caused \"more profit\".)",
"We refer to the appendix for the complete setup.",
"As shown in Table 3, on the XSum dataset, summaries generated by the TP-TRANSFORMER are significantly better in grammar.",
"This corroborates our claim that having the TPR can improve the model's ability to follow the correct syntax in composing the summary.",
"On the Wikihow dataset, the Transformer receives more votes regarding saliency.",
"However, our TP-TRANSFORMER maintains an advantage in grammar and achieves significantly better overall preferences.",
"Unfaithful XSum Examples It is well-known that the XSum dataset contains a portion of unfaithful reference summaries that mention facts not included in the source article (Durmus et al., 2020; Maynez et al., 2020).",
"Therefore, we are interested to find out whether our TP-TRANSFORMER is better than the baseline only at expressing the faithful content, or whether it can also generate some external, \"unfaithful\" facts that the baseline cannot cover. To answer this question, we randomly sample 100 examples from the XSum dev set and manually examine the source document, reference summary, and the two generated summaries. Among these 100 examples, we identify 71 examples whose reference summary includes \"unfaithful\" facts that are not mentioned in the source.",
"In 21 out of 71 examples, the Transformer baseline manages to generate some \"unfaithful\" facts that match those in the reference, while our TP-TRANSFORMER achieves this in 17 examples. Such \"unfaithful\" facts recovered by the models include the full name of a person when only the last name is mentioned in the source, or the political party or job title of a person, each of which can be attributed to at least one example seen by the models during training.",
"Therefore, we believe that both models learn to draw external information from their memory of seen examples, and our TP-TRANSFORMER does no better than the baseline Transformer at referring to external facts to obtain higher ROUGE scores.",
"Probing is a method to test whether some particular information is present in the model's encodings.",
"To achieve this, an auxiliary classifier is trained to predict specified linguistic features from the model's internal representations.",
"We probe different components (roles, fillers, TPRs) in our TP-TRANSFORMERs, as well as the attention+residual outputs (equivalent to the fillers) of the Transformer, to assess the naturally emergent structures encoded in the role vectors and the effectiveness of the TPR in the decoding process.",
"By conducting the probing experiments, we aim to (1) provide some insights and evidence of the different information encoded by the role and filler vectors; and (2) explain the ROUGE advantage of our TP-TRANSFORMER by showing that its output representation can better encode the linguistic structural information concerning multiple probing tasks.",
"When studying an encoder, previous works probe its i-th intermediate representation at a certain layer for information about the i-th input token. For a decoder, however, we probe its i-th representation for clues about the i-th token it generates given the i − 1 previously generated tokens as input.",
"Intuitively, we are probing for the decoder's internal decision about the syntactic roles and semantic content of this token before it was ultimately selected.",
"Based on encoder probing tasks used by Tenney et al. (2019b), we select and adapt four tasks to probe our decoders.",
"Part-of-speech tagging (POS) is the syntactic task of assigning tags such as noun (singular/mass noun: NN, proper noun: NNP, etc), verb (past tense: VBD, past participle: VBN, etc), adjective (comparative: JJR, etc), etc. to each token i .",
"We let s 1 = [ i, i + 1) be a single token, and seek to predict its POS tag.",
"Dependency labeling (DEP) seeks to predict the functional relationships of one token relative to another: e.g. is it a modifier-head relationship, a subject-verb relationship, etc.",
"We take s 1 = [ i, i + 1) to be a single token and s 2 = [ j, j + 1) to be its syntactic head, and seek to predict the dependency relation between tokens i and j .",
"Semantic role labeling (SRL) is the task of imposing predicate-argument structure onto a sentence.",
"We let s1 = [i1, j1) represent a known predicate (e.g., \"push\") and s2 = [i2, j2) represent a known argument (\"Peter\") of that predicate, and seek to predict the role that the argument s2 fills, e.g., ARG0 (agent, the pusher) vs. ARG1 (patient, the pushee).",

Table 4: Results (F1 scores) of probing different intermediate representations in decoders trained on the XSum dataset. Each cell reads role/filler/TPR; the Transformer has no role vectors.

| Task | Layer | Transformer | TPT-d (Ours) |
|---|---|---|---|
| POS | 1 | -/58.4/58.4 | 36.1/57.1/58.2 |
| POS | 2 | -/65.4/65.4 | 43.6/63.5/64.4 |
| POS | 3 | -/68.6/68.3 | 50.4/67.4/68.5 |
| POS | 4 | -/70.7/70.7 | 50.4/70.8/72.1 |
| POS | 5 | -/72.5/72.5 | 53.4/73.3/73.9 |
| POS | 6 | -/73.3/73.3 | 56.0/73.9/74.5 |
| DEP | 1 | -/78.1/78.1 | 53.1/78.8/78.9 |
| DEP | 2 | -/85.0/85.0 | 59.9/84.8/84.7 |
| DEP | 3 | -/87.1/87.1 | 66.7/87.4/87.3 |
| DEP | 4 | -/87.4/87.4 | 62.9/88.3/88.2 |
| DEP | 5 | -/85.0/85.0 | 64.8/88.3/87.6 |
| DEP | 6 | -/86.1/86.1 | 60.8/86.8/86.6 |
| SRL | 1 | -/78.2/78.2 | 73.1/78.5/78.4 |
| SRL | 2 | -/79.0/79.0 | 73.8/79.8/79.3 |
| SRL | 3 | -/79.6/79.6 | 73.8/79.9/80.0 |
| SRL | 4 | -/78.7/78.7 | 73.1/80.1/80.2 |
| SRL | 5 | -/77.7/77.7 | 72.9/79.9/79.8 |
| SRL | 6 | -/78.1/78.1 | 71.8/79.2/78.2 |
| NEL | 1 | -/59.7/59.7 | 33.3/61.4/60.8 |
| NEL | 2 | -/67.6/67.6 | 37.6/68.1/68.2 |
| NEL | 3 | -/69.6/69.6 | 41.5/70.9/71.0 |
| NEL | 4 | -/71.8/71.8 | 43.6/74.3/73.2 |
| NEL | 5 | -/72.3/72.3 | 44.7/76.3/75.7 |
| NEL | 6 | -/73.3/73.3 | 42.2/76.1/73.8 |
"Named entity labeling (NEL) is the task of predicting the category of an entity.",
"The categories include PERSON , LOCATION , ORGANIZATION , etc.",
"We let s 1 = [ i, j ) represent a known entity span and seek to predict its type.",
"As there is no existing dataset for probing decoders, we create our own training and evaluation data by running off-the-shelf models on the summarization datasets.",
"Specifically, to probe a decoder trained on the XSum dataset on the POS task, we run a POS tagger on the reference summaries from the XSum training set and on the model-generated summaries for the XSum dev set, to create the ground-truth labels for the training set and a model-specific dev set.",
"We restore the model trained on a summarization dataset and freeze its parameters.",
"Following Tenney et al. (2019b), we train a span convolution layer followed by a 2-layer MLP on top of the target representation, projecting it onto the output label space.",
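"A minimal sketch of such a probing head (PyTorch; the mean-pooled span representation is a simplification of the span convolution, and the label count is an assumption):",

```python
import torch
import torch.nn as nn

class ProbingHead(nn.Module):
    """2-layer MLP probe trained over frozen decoder representations."""
    def __init__(self, d_in: int = 512, d_hidden: int = 256, n_labels: int = 45):
        super().__init__()        # n_labels: e.g., a Penn Treebank-style POS tag set
        self.mlp = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_labels),
        )

    def forward(self, reps: torch.Tensor, span: tuple) -> torch.Tensor:
        # reps: (d_s, d_in) frozen representations; span = (i, j), token interval
        pooled = reps[span[0]:span[1]].mean(dim=0)  # simple span pooling
        return self.mlp(pooled)                     # logits over probe labels
```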
"Table 4 presents the results of probing the decoder of a TP-TRANSFORMER trained on the XSum (Narayan et al., 2018) dataset.",
"Note that the Transformer doesn't have role vectors.",
"It directly outputs the vector after the multi-head attention and the residual layer.",
"Therefore, its fillers and final representations are equivalent.",
"The decoder role vectors can encode grammatical information while the filler vectors represent the semantics.",
"We first focus on the results of POS tagging probing task.",
"Overall, we see a trend of increasing scores as the representations get closer to the final step of computing the distribution over the vocabulary.",
"This implies that, as the computation progresses through the layers, the generated representations are gradually deciding the POS tag of the next word to generate.",
"Next, we observe that the role vectors (the 1st number in the TPT-d column) of TP-TRANSFORMER encode a considerable amount of information about the POS tag of the next word generated.",
"Additionally, because the job of deducing the POS tag of the next word is partially shared by the role vectors, the filler vectors' performance degrades compared to the Transformer.",
"This pattern demonstrates that the TP-TRANSFORMER 's decoder is representing the next word to be generated as a composite of structural information encoded in the role vectors and semantic contents encoded in the filler vectors.",
"Comparing the fillers (the 2nd number in the TPT-d column) with the TPRs (the 3rd number in the TPT-d column) of the TP-TRANSFORMER, we see that the TPRs, which bind the roles and fillers, outperform the roles and fillers alone at every layer.",
"This indicates that the TPR effectively aggregates the linguistic knowledge encoded in the roles and fillers into a shared space, where the POS tag of the next word can be decoded more easily than in the role space or filler space alone.",
"Last, the final representations of TP-TRANSFORMER achieve higher F1 scores than their counterparts in the Transformer in the last three layers.",
"This demonstrates the benefits of having the TPR in interpreting the POS tag of the word to be generated.",
"When we consider the Dependency Labeling (DEP) and Semantic Role Labeling (SRL) tasks, we observe that our TP-TRANSFORMER's final representations consistently beat the Transformer's across all layers, with only one exception in the DEP task at layer 2.",
"We also observe that the TP-TRANSFORMER's advantage becomes larger in the last three layers, except for the final layer in the SRL task.",
"However, unlike in the POS task, the TPRs only achieve F1 scores similar to the fillers.",
"Finally, in the Named entity labeling (NEL) task which is considered to require more semantic information rather than syntax, the role vectors' performance is poorer than their performance in the three syntactic tasks.",
"For example, the TP-TRANSFORMER's final representations at layer 6 obtain similar F1 scores in the POS and NEL tasks (74.5 vs. 73.8), but its role vectors only achieve a 42.2 F1 score in the NEL task, compared to 56.0 in POS.",
"However, even though the role vectors encode little information about the named entity type of the next token to be generated, the TPR still strongly outperforms the Transformer's filler-only representation at every layer.",
"We argue that although the syntactic information encoded in the role vectors is not enough to predict the correct named entity, it is still a beneficial complement to the knowledge encoded in the distributed filler vectors in certain situations.",
"For example, whether the subject \"Chanel\" refers to a PERSON or an ORGANIZATION could depend on its syntactic role and its relation to other words in the sentence (e.g., whether it is the subject or object of \"wears\").",
"Interpretability of the representations.",
"Overall, by probing the different intermediate representations of the TP-TRANSFORMER and the Transformer, we show that having the compositional TPR results in more interpretable final representations at every layer regarding the syntactic features of the next word to be generated.",
"Considering automatic evaluations generated summaries in Sec. 3.3, we argue that this compositionality in learned representation and its syntactic interpretability enable the decoder to take better control of the syntactic structure of the generation when assembling multiple distant facts, and thus lead to summaries of better quality.",
"During the training of our TP-TRANSFORMER models on the summarization datasets, we observe that most learned role attention distributions are approximately one-hot, as more than 90% of the role attention distributions (as computed in Eqn. 5) have a maximum score larger than 0.98.",
"Because each role vector is the concatenation of H vectors, each selected from N r role embeddings, the completely one-hot role attentions will yield ( N r ) H possible role vectors.",
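"For instance, under the configuration used here (N_r = 50 role embeddings and H = 8 heads), fully one-hot role attention would admit 50^8 ≈ 3.9 × 10^13 distinct composed role vectors per layer.",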
"Therefore, the learned, approximately one-hot role vectors span ( N r ) H discrete subspaces, each of which only covers the close proximity of a concatenation of H role embeddings.",
"This finding indicates that as we represent the role vectors as multi-head attention over a learnable dictionary of role embeddings, the structural inductive bias: (1) pushes the role vector space to be even more discrete, and (2) induces the syntactic structures encoded in these discrete role vectors.",
"We also believe there is a connection between the above two effects, as the structural, syntactic information favors a lower-dimensional or even discrete space while the distributed, semantic information favors a higher-dimensional space.",
"Explicit TPR Structures in Neural Networks. While earlier TPR work based on Smolensky (1990) focused on computability rather than learnability questions, recently TPRs have been incorporated into several recurrent deep learning models in order to solve various NLP tasks, including part-of-speech tagging, constituency parsing, image captioning (Huang et al., 2018, 2019), question answering (Palangi et al., 2018; Schlag and Schmidhuber, 2018), and natural-to-formal language generation (program synthesis) (Chen et al., 2020).",
"Most recently, TPRs have been introduced into Transformer architectures, starting with Schlag et al. (2019) which introduced the TP-TRANSFORMER to improve the performance and interpretability of mathematical problem solving models.",
"This model generated continuous role vectors by directly projecting from layer inputs, whereas our model indexes from a dictionary of role embeddings to form the role vectors which are shown to reside in a highly discrete space.",
"Structured Representations for Abstractive Summarization. Compared to extractive methods, abstractive summarization models usually fail to show extractive properties and have a tendency to copy text from the source (See et al., 2017; Paulus et al., 2018; Pasunuru and Bansal, 2018; Celikyilmaz et al., 2018).",
"More recent approaches that use standard transformers deal with this issue by introducing hierarchical structures to encode local and global information separately focusing on only the semantic content (Liu and Lapata, 2018, 2019).",
"To preserve salient source relations and generate abstractive summaries of the source document, previous work infused models with semantic parsers: while Song et al. (2018) introduces a new structure-infused copy mechanism that combines the source syntactic structure with the copy mechanism, Liao et al. (2018) uses abstract meaning representations (AMR).",
"While these approaches require that the document sentence semantic parsers are provided beforehand, our models can implicitly learn to approximate the syntactic structure and semantic content in their representations.",
"In this work, we enrich the Transformer model with the structured Tensor Product Representation for abstractive summarization tasks.",
"We represent every token as a pair of role and filler vectors.",
"We show that our TP-TRANSFORMER with discrete roles outperforms Transformer and TPTRANSFORMER with continuous roles on several abstractive summarization datasets, in both metrics scores and human evaluation.",
"We further demonstrate the syntactic structures encoded in the role vectors and show the improved syntactic interpretability in our model's hidden states.",
"In this work we propose a new encoder-decoder modeling architecture and build several models to benchmark our new architecture with baseline architectures on several open source summarization datasets.",
"Intended use.",
"Our architecture is designed to build models of abstractive summarization.",
"Potentially our architecture could be used to train models for summarizing any type of company internal datasets (e.g., internal documents, reports, meetings, legal forms, etc.) to further improve the productivity and efficiency of the users in their daily activities without needing to read long documents.",
"Failure mode.",
"Even though our models yield factually consistent summaries, as judged by human evaluation, they can still generate factually inconsistent summaries or sometimes hallucinate information that the source document does not include.",
"This might be due to the bias or noise in the training data.",
"Model builders wanting to use our architecture to build models on their company internal datasets should build models with consideration of intellectual properties and privacy rights.",
"Misuse Potential.",
"We note the models to be built with our architecture should be used with careful consideration.",
"The generated summaries produced by our models are not controlled and use generative approaches, therefore, they could generate unreliable text.",
"Researchers working on abstractive summarization should focus on generating factually correct, ethical and reliable text.",
"If our models are trained on news datasets, a careful consideration should be made on factuality of the generated text and measures have been taken to prevent model hallucinations.",
"We thank the reviewers for their helpful comments.",
"This work was partially supported by NSF-CAREER Award 1846185 and a Microsoft Investigator Fellowship."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"Constituent parsing has been studied extensively in the last decades.",
"Chomsky-Schutzenberger parsing as an approach to constituent parsing has only been investigated theoretically, yet.",
"It uses the decomposition of a language into a regular language, a homomorphism, and a bracket language to divide the parsing problem into simpler subproblems.",
"We provide the first implementation of Chomsky-Sch utzenberger parsing.",
"It employs multiple context-free grammars and incorporates many refinements to achieve feasibility.",
"We compare its performance to state-of-the-art grammar-based parsers.",
"The description of the syntax of natural languages (such as Danish, English, and German) with the help of formal grammars has been studied since Chomsky (1956).",
"With a formal grammar, computers can calculate a syntactic representation (called parse ) of a sentence in a natural language.",
"Of the grammar classes in the Chomsky hierarchy (Chomsky, 1959), context-free grammars (short: CFGs) lack the expressive power necessary to model natural languages (Shieber, 1985) and parsing with context-sensitive grammars cannot be done efficiently (i.e. in polynomial time).",
"This led to the introduction of a series of classes of mildly context-sensitive grammars (Joshi, 1985) that allow parsing in polynomial time but also capture an increasing amount of phenomena present in natural languages.",
"Tree adjoining grammars (Joshi et al., 1975), linear context-free string-rewriting systems (short: LCFRSs, Vijay-Shanker et al., 1987), and multiple CFGs (short: MCFGs, Seki et al., 1991) are among those classes.",
"Chomsky-Schutzenberger (short: CS) parsing was introduced by Hulden (2011) for CFGs and extended to MCFGs by Denkinger (2017).",
"It uses a classical theorem by Chomsky and Schutzenberger (1963, or the generalisation by Yoshinaka et al., 2010), which states that the language L ( G ) of a CFG (or an MCFG) G can be represented by a regular language R , a homomorphism h , and a Dyck language (resp. multiple Dyck language) D such that L ( G ) = h ( R D ) .",
"The elements of R D correspond to parses in G .",
"For a sentence w , a CS parser calculates the elements of h 1 ( w ) R D and transforms them into parses.",
"CS parsing can be viewed as a coarse-to-fine mechanism where R corresponds to the coarse grammar and R D to the fine grammar.",
"The respective coarse-to-fine pipeline consists of (con-ceptually) simple operations such as h 1 or the intersection with R , which provides great flexibility.",
"The flexibility is used to provide a fallback mechanism in case a finer stage of the pipeline rejects all proposals of a coarser stage.",
"It also permits CS parsing in a broader setting than usual (for parsing) with minimal modification (see sec. 6).",
"We suspected that the coarse-to-fine view on CS parsing leads to an efficient implementation.",
"Since initial tests revealed that the original algorithm for MCFGs (Denkinger, 2017, alg. 3, recalled in sec. 2) is not feasible in practice, we explore numerous optimisations (sec. 4), one of which is the use of a context-free approximation of the multiple Dyck language D .",
"We introduce component-wise derivations (sec. 3) to relate this context-free approximation to D .",
"Employing the optimisations, we provide the first implementation of a CS parser.",
"In sec. 5, we compare our parser's performance to Grammatical Framework (Angelov and Ljunglof, 2014), rparse (Kallmeyer and Maier, 2013), and disco-dop (van Cranenburgh et al., 2016).",
"We restrict our comparison to (discontinuous) grammar-based parsers (excluding e.g. transition systems, Maier, 2015, Coavoux and Crabbe, 2017) since the principle of CS parsing requires a grammar.",
"The sets of non-negative integers and positive integers are denoted by N and N + , respectively.",
"We abbreviate { 1 , . . . , n } by [ n ] for each n N .",
"Let A and B be sets.",
"The powerset of A and the set of (finite) strings over A are denoted by P ( A ) and A , respectively.",
"The set of possibly infinite sequences of elements of A is denoted by A .",
"A partition of A is a set P P ( A ) whose elements (called cells ) are non-empty, pairwise disjoint, and cover A (i.e. (cid:83) p P p = A ).",
"For each a A and each equivalence relation on A , we denoted the equivalence class of a w.r.t. by [ a ] .",
"The set of functions from A to B is denoted by A B .",
"Note that ( A B ) ( A B ) .",
"The composition of two binary relations R 1 and R 2 is R 2 R 1 = { ( a, c ) | b : ( a, b ) R 1 , ( b, c ) R 2 } .",
"Finite state automata.",
"We assume that the reader is familiar with finite state automata.",
"For details, we refer to Hopcroft and Ullman (1979).",
"A finite state automaton (short: FSA) is a tuple A = ( Q, , q i , q f , T ) where Q and are finite sets ( states and terminals , respectively), q i , q f Q ( initial and final state , respectively), and T Q Q is finite ( transitions ).",
"We call q the source and q (cid:48) the target of a transition ( q, u, q (cid:48) ) .",
"A run is a string of transitions such that the target of a transition is the source of the next transition in .",
"The language of A is denoted by L ( A ) .",
"Sorts.",
"Sorts are a widespread concept in computer science: one can think of sorts as data types in a programming language.",
"Let S be a set (of sorts ).",
"An S -sorted set is a tuple ( , sort ) where is a set and sort : S .",
"We abbreviate ( , sort ) by and sort 1 ( s ) by s for s S .",
"Now let be an ( S S ) -sorted set.",
"The set of trees over is the S -sorted set T where (T ) s = { ( t 1 , . . . , t k ) | s 1 , . . . , s k S, ( s 1 s k ,s ) , t 1 (T ) s 1 , . . . , t k (T ) s k } for each s S .",
"Multiple context-free grammars.",
"A rule of a context-free grammar has the ability to concatenate the strings generated by its right-hand side non-terminals.",
"Multiple context-free grammars extend this ability to concatenating stringtuples .",
"This is done with the help of composition functions.",
"Let be a finite set.",
"A composition function w.r.t. is a function c that takes tuples of strings over as arguments and returns a tuple of strings over (i.e. there are k N and s 1 , . . . , s k , s N + such that c : ( ) s 1 . . . ( ) s k ( ) s ), and is defined by an equation c (( x 11 , . . . , x s 1 1 ) , . . . , ( x 1 k , . . . , x s k k )) = ( u 1 , . . . , u s ) where u 1 , . . . , u s are strings of x ji 's and symbols from .",
"We call c linear if each x ji occurs at most once in u 1 u s .",
"We sometimes write [ u 1 , . . . , u s ] instead of c .",
"Furthermore, setting sort ( c ) = ( s 1 s k , s ) , the composition functions w.r.t. form a sorted set.",
"The following example shows how linear composition functions are used in the rules of a multiple context-free grammar.",
"Example 1. Consider G = ( N, , S, P ) where N = { S, A, B } and = { a , b , c , d } are finite sets ( non-terminals and terminals , respectively), S N ( initial non-terminal ) and P is a finite set ( rules ) that contains the following five objects: 1 = S [ x 11 x 12 x 21 x 22 ]( A, B ) 2 = A [a x 11 , c x 21 ]( A ) 4 = A [ , ]() 3 = B [b x 11 , d x 21 ]( B ) 5 = B [ , ]() .",
"We call G a multiple context-free grammar.",
"Consider the rule 1 .",
"Similar to a rule of a context-free grammar, 1 rule has one left-hand side nonterminal ( S ) and zero or more right-hand side non-terminals ( A and B ).",
"A derivation of G can be build by combining rules in P to form a tree according to their leftand right-hand side nonterminals.",
"If a derivation starts with the initial non-terminal (here S ), then it is called complete .",
"Hence, each complete derivation in G has the form d m,n = 1 ( m 2 ( 4 ) , n 3 ( 5 )) for some m, n N .",
"If we replace each rule in a derivation by its composition function, we obtain a term of composition functions which can be evaluated.",
"We call the resulting value the yield of a derivation.",
"A derivation d m,n has yield yd( d m,n ) = a m b n c m d n .",
"The set of yields of all complete derivations is the language of G : L ( G ) = { a m b n c m d n | m, n N } .",
"(cid:3)",
"Definition 2 (Seki et al., 1991) .",
"A multiple context-free grammar (short: MCFG) is a tuple G = ( N, , S, P ) where N is a finite N + -sorted set ( non-terminals ), is a finite set ( terminals ), S N 1 ( initial non-terminal ), P is a finite ( N N ) -sorted set of strings of the form A c ( B 1 , . . . , B k ) such that A, B 1 , . . . , B k N , c is a linear composition function, and sort ( c ) = ( sort ( B 1 ) sort ( B k ) , sort ( A )) .",
"The sort of is ( B 1 B k , A ) .",
"The left-hand side (short: lhs) of is A .",
"The fanout of is fanout( ) = sort ( A ) and the rank of is rank ( ) = k .",
"The elements of P are called rules .",
"The set of derivations (resp. complete derivations) of G is DG = TP (resp.",
"D c G = (TP ) S ).",
"Let w and d = ( d 1 , . . . , d k ) DG with = A c ( B 1 , . . . , B k ) .",
"The yield of d is yd( d ) = c (yd( d 1 ) , . . . , yd( d k )) .",
"The set of derivations of w in G is D c G ( w ) = yd 1 ( w ) D c G .",
"The language of A in G is L ( G, A ) = { yd( d ) | d (TP ) A } .",
"The language of G is L ( G ) = L ( G, S ) .",
"Any language generated by an MCFG is called multiple context-free (short: mcf).",
"(cid:3)",
"A context-free grammar (short: CFG) is an MCFG where each non-terminal has sort 1. Each rule of a CFG has the form A [ u 0 x 1 i (1) u 1 x 1 i ( n ) u n ]( B 1 , . . . , B k ) .",
"We abbreviate this rule by A u 0 B i (1) u 1 B i ( n ) u n .",
"Weighted multiple context-free grammars.",
"A weighted MCFG is obtained by assigning a weight to each rule of an (unweighted) MCFG.",
"In this paper, the weights will be taken from a p artially o rdered c ommutative mo noid with z ero (short: POCMOZ).",
"A POCMOZ is an algebra ( M, (cid:12) , 1 , 0 , (cid:2) ) where (cid:2) is a partial order on M ; (cid:12) is associative, commutative, decreasing (i.e. m (cid:12) m (cid:48) (cid:2) m ), and monotonous (i.e. m 1 (cid:2) m 2 implies m 1 (cid:12) m (cid:2) m 2 (cid:12) m ); 1 is neutral w.r.t. (cid:12) ; and 0 is absorbing w.r.t. (cid:12) .",
"We call M factorisable if for each m M \\ { 1 } , there are m 1 , m 2 M \\ { 1 } with m = m 1 (cid:12) m 2 .",
"The probability algebra Pr = ([0 , 1] , , 1 , 0 , ) is a factorisable POCMOZ where r = r r for each r [0 , 1) .",
"Example 3 (continues ex. 1) .",
"Consider the tuple ( G, ) where : P Pr is a function where ( 1 ) = 1 , ( 2 ) = ( 4 ) = 1 / 2 , ( 3 ) = 1 / 3 , and ( 5 ) = 2 / 3 .",
"We call ( G, ) a weighted MCFG.",
"The weight of the a derivation d m,n is obtained by multiplying the weights of all rule occurrences in it: wt( d m,n ) = 1 / 2 m +1 2 / 3 n +1 .",
"(cid:3)",
"Definition 4. A weighted MCFG (short: wMCFG) is a tuple ( G, ) where G = ( N, , S, P ) is an MCFG ( underlying MCFG ), : P M \\ { 0 } ( weight assignment ), and ( M, (cid:12) , 1 , 0 , (cid:2) ) is a factorisable POCMOZ.",
"( G, ) inherits all objects associated with G .",
"Let d = ( d 1 , . . . , d k ) DG .",
"The weight of d is wt( d ) = ( ) (cid:12) (cid:74) ki =1 wt( d i ) .",
"(cid:3)",
"For the rest of this paper, we fix a wMCFG ( G, ) with underlying MCFG G = ( N, , S, P ) and weight assignment : P M .",
"Chomsky-Schutzenberger theorem.",
"In the Chomsky-Schutzenberger theorem for CFGs (cf. sec. 1), D contains strings of brackets where each opening bracket is matched by the corresponding closing bracket.",
"This property can be described with an equivalence relation.",
"Let be a set (of opening brackets) and s be the set (of closing brackets) that contains s for each .",
"We define as the smallest equivalence relation where u s v uv for each and u, v .",
"The Dyck language w.r.t. is D = [ ] .",
"In the Chomsky-Schutzenberger representation for MCFGs, the brackets fulfil three functions:",
"(i) terminal brackets (cid:74) (cid:75) stand for a terminal symbol ,",
"(ii) component brackets (cid:74) (cid:96) and (cid:75) (cid:96) denote beginning and end of substrings produced by the (cid:96) -th component of a rule , and",
"(iii) variable brackets (cid:74) j,i and (cid:75) j,i denote beginning and end of substrings produced by variable x ji in a rule .",
"As for CFGs, each opening bracket must be matched by the corresponding closing bracket.",
"Furthermore, because applying a rule of an MCFG produces multiple strings simultaneously, we need to ensure that the brackets corresponding to the same application of a rule occur simultaneously.",
"This is described with another equivalence relation.",
"Let P be a partition of .",
"Intuitively, each cell of P is a set of (opening) brackets that occur simultaneously.",
"We define P as the smallest equivalence relation on P (cid:0) ( ) (cid:1) where for each { 1 , . . . , s } P with |{ 1 , . . . , s }| = s , u 0 , . . . , u s , v 1 , . . . , v s D , and L ( ) : (cid:8) u 0 1 v 1 s 1 u 1 s v s s s u s (cid:9) L P (cid:8) u 0 u s , v 1 v s (cid:9) L .",
"The multiple Dyck language w.r.t. P is mD P = (cid:83) ( L | L [ { } ] P ) .",
"Note that mD P D .",
"Theorem 5 provides a representation of each mcf language by a multiple Dyck language (see above), a recognisable language (to ensure local consistency), and a homomorphism (to decode the bracket sequences into terminal strings).",
"The corresponding construction is recalled in def.",
"6. Theorem 5 (cf. Yoshinaka et al., 2010, thm. 3) .",
"For every mcf language L there are a homomorphism h : ( s ) , a regular language R ( s ) , and a multiple Dyck language mD ( s ) such that L = h ( R mD ) .",
"(cid:4)",
"Definition 6 (Denkinger, 2017, def. 3.6, 4.9, 5.15) .",
"The multiple Dyck language w.r.t. G is mD G = mD PG where PG is the smallest set that contains the cell (cid:8) (cid:74) (cid:9) for each and the cells (cid:8) (cid:74) (cid:96) | (cid:96) [ sort ( A )] (cid:9) and (cid:8) (cid:74) j,i | j [ sort ( B i )] (cid:9) for each = A c ( B 1 , . . . , B k ) P and i [ k ] .",
"Let G = (cid:83) p PG p .",
"We denote the elements of G by closing brackets, e.g. s (cid:74) = (cid:75) , and let G = G G .",
"The homomorphism w.r.t. G , denoted by hom G , is the unique extension of h : G { } to strings where h ( ) = if is of the form (cid:74) and h ( ) = otherwise.",
"The automaton w.r.t. G , denoted by AG , is the FSA ( Q, G , S 1 , S 1 , T ) where Q = (cid:8) A (cid:96) , A (cid:96) | A N, (cid:96) [ sort ( A )] (cid:9) and T is the smallest set such that for each rule P of the form A [ u 1 , 0 y 1 , 1 u 1 , 1 y 1 ,n 1 u 1 ,n 1 , . . . , u s, 0 y s, 1 u s, 1 y s,n s u s,n s ]( B 1 , . . . , B k ) where the y s are elements of X and the u s are elements of , we have (abbreviating (cid:74) 1 (cid:75) 2 (cid:74) k (cid:75) k by (cid:94) 1 k ) the following transitions in T :",
"(i) (cid:0) A (cid:96) , (cid:74) (cid:96) (cid:103) u (cid:96), 0 (cid:75) (cid:96) , s A (cid:96) (cid:1) T for every (cid:96) [ s ] with n (cid:96) = 0 ,",
"(ii) (cid:0) A (cid:96) , (cid:74) (cid:96) (cid:103) u (cid:96), 0 (cid:74) j,i , B ji (cid:1) T for every (cid:96) [ s ] where n (cid:96) (cid:54) = 0 and y (cid:96), 1 is of the form x ji ,",
"(iii) (cid:0) s B ji , (cid:75) j,i (cid:103) u (cid:96), (cid:74) j (cid:48) ,i (cid:48) , B j (cid:48) i (cid:48) (cid:1) T for every (cid:96) [ s ] and [ n (cid:96) 1] where y (cid:96), is of the form x ji and y (cid:96), +1 is of the form x j (cid:48) i (cid:48) , and",
"(iv) (cid:0) s B ji , (cid:75) j,i (cid:103) u (cid:96),n (cid:96) (cid:75) (cid:96) , s A (cid:96) (cid:1) T for every (cid:96) [ s ] where n (cid:96) (cid:54) = 0 and y (cid:96),n (cid:96) is of the form x ji .",
"Example 7 (continues ex. 1) .",
"The automaton w.r.t. G is shown in fig.",
"1. An illustration of the application of PG is given in the appendix (p. 11).",
"(cid:3)",
"The vanilla parser.",
"The vanilla parser (i.e. alg. 3 from Denkinger, 2017), is shown in fig.",
"2 (top).",
"Similar to the parser proposed by Hulden (2011), we divide it in three essential phases:",
"(i) FSA constructions for the intersection of hom 1 G ( w ) and RG ,",
"(ii) an extraction of (in our case multiple ) Dyck words from the intersection, and",
"(iii) the conversion of words into derivations.",
"Formally, the vanilla parser is the function V : (D c G ) defined as V = MAP ( TODERIV ) FILTER (mD G ) SORT ( (cid:48) ) ( RG ) hom 1 G",
"where hom 1 G ( w ) RG is represented by an FSA for each w (phase",
"(i)).",
"(cid:48) ( u ) is the product of the weights of each occurrence of a bracket of the form (cid:74) (cid:96) or (cid:75) (cid:96) in u .",
"These weights are fixed such that (cid:48) (cid:0) (cid:74) 1 (cid:75) 1 (cid:74) (cid:96) (cid:75) (cid:96) ) = ( ) for each P with fanout (cid:96) .",
"SORT ( (cid:48) ) brings the elements of its argument, which is a subset of G , in some descending order w.r.t. (cid:48) and (cid:2) , returning a (pos-sibly infinite) sequence of elements of G , which we call candidates .",
"Sequences are implemented as iterators.",
"FILTER (mD G ) removes the candidates from its argument sequence that are not in mD G while preserving the order (cf. Denkinger, 2017, alg. 2).",
"(Both steps, SORT ( (cid:48) ) and FILTER (mD G ) , are phase",
"(ii).) TODERIV returns the derivation in G that corresponds to its argument (which is from the set RG mD G ), cf.",
"Denkinger (2017, function fromBrackets, p. 20).",
"MAP ( TODERIV ) applies TODERIV to each candidate in its argument while preserving the order (phase",
"(iii)).",
"Denkinger (2017, thm. 5.22) showed that TAKE ( n ) V solves the n -best parsing problem.",
"1 We omit the additional restrictions that he imposed on the given wMCFG because they are only necessary to show the termination of his algorithm.",
"1 In the following, we will gloss over the distinction between derivations and parses.",
"In sec. 4, we will outline modifications to the vanilla parser that make the extraction of the elements of mD G from hom 1 G ( w ) RG efficient (items 24).",
"To facilitate this, we first decompose FILTER (mD G ) into FILTER (mD G ) FILTER (D G ) , which is possible because D G mD G .",
"Secondly, we implement FILTER (D G ) SORT ( (cid:48) ) with a dynamic programming algorithm (cf. Hulden, 2011, alg. 1, similar to Bar-Hillel et al., 1961, sec. 8).",
"And lastly, we replace FILTER (mD G ) by steps that exploit the well-bracketing of the elements of D G .",
"The elements of RG D G can be represented as trees over rules of G .",
"2 We label the edges of those trees to allow us to check if vertices that correspond to the same application of a rule of the MCFG G match.",
"The resulting objects are called component-wise derivations .",
"The set RG D G is characterised in terms a CFG G cf .",
"Definition 8. Let P be a rule of the form A [ u 1 , . . . , u s ]( B 1 , . . . , B k ) , (cid:96) [ s ] , and u (cid:96) be of the form w 0 x j (1) i (1) w 1 x j ( n ) i ( n ) w n for some w 0 , . . . , w n .",
"We define the rule ( (cid:96) ) = A (cid:96) (cid:113) (cid:96) (cid:102) w 0 v 1 (cid:102) w 1 v n (cid:102) w n (cid:121) (cid:96) where each v = (cid:74) j ( ) ,i ( ) B j ( ) i ( ) (cid:75) j ( ) ,i ( ) .",
"The context-free CS approximation of G (short: CFA ), denoted by G cf , is the CFG ( N cf , G , S 1 , P cf ) where N cf = { A (cid:96) | A N, (cid:96) [ sort ( A )] } and P cf = { ( (cid:96) ) | P, (cid:96) [fanout( )] } .",
"(cid:3)",
"Observation 9. D G RG = L ( G cf ) .",
"(cid:4)",
"Definition 10. Let (cid:96) N + and t be a tree whose vertices are labelled with elements of P and whose edges are labelled with elements of N + N + .",
"The label at the root of t is denoted by root( t ) .",
"The set of labels of the outgoing edges from the 2 Those trees correspond to the derivations of the guiding grammar in the coarse-to-fine parsing approach of Barth elemy et al. (2001, sec. 3).",
"root of t is denoted by out( t ) .",
"A ( i, j ) -subtree of t , is a sub-graph of t consisting of all the vertices (and their edges) reachable from some target vertex of the outgoing edge from the root that is labelled with ( i, j ) .",
"If there is a unique ( i, j ) subtree of t , then we denote it by sub ( i,j ) ( t ) .",
"Now let root( t ) = A [ u 1 , . . . , u s ]( B 1 , . . . , B k ) .",
"We call t an ( (cid:96) -)component-wise derivation, short: ( (cid:96) -)cow derivation, of G if the following four requirements are met:",
"(i) out( t ) contains exactly the pairs ( i, j ) such that x ji occurs in u (cid:96) ,",
"(ii) a unique ( i, j ) -subtree of t exists,",
"(iii) root(sub ( i,j ) ( t )) has lhs B i , and",
"(iv) sub ( i,j ) ( t ) is a j -cow derivation for each ( i, j ) out( t ) .",
"We denote the set of cow derivations of G whose root's lhs is S by cowD c G .",
"The set of (cid:96) -cow derivations whose root's label has lhs A is denoted by (cid:96) cowD AG .",
"(cid:3)",
"An example of a cow derivation is shown in fig.",
"3a.",
"The root is the top-most vertex.",
"Definition 11. Let = A c ( B 1 , . . . , B k ) P , (cid:96) [fanout( )] , and the (cid:96) -th component of c be u 0 x j (1) i (1) u 1 x j ( n ) i ( n ) u n with u 1 , . . . , u n .",
"Furthermore, for each [ n ] , let t j ( ) cowD B i ( ) G .",
"By (cid:10) ( i ( ) , j ( )) /t | [ n ] (cid:11) , we denote the cow derivation t such that root( t ) = , out( t ) = { ( i ( ) , j ( )) | [ n ] } , and for each [ n ]: sub ( i ( ) ,j ( )) ( t ) = t .",
"(cid:3)",
"Proof sketch.",
"We define the partial function toCowD from G to cow derivations of G as follows: toCowD( u ) = (cid:10) ( i ( ) , j ( )) / toCowD( v ) | [ n ] (cid:11) if u is of the form (cid:114) (cid:96) (cid:102) u 0 (cid:74) j (1) ,i (1) v 1 (cid:75) j (1) ,i (1) (cid:102) u 1 . . . (cid:74) j ( n ) ,i ( n ) v n (cid:75) j ( n ) ,i ( n ) (cid:102) u n (cid:122) (cid:96) for some rule = A c ( B 1 , . . . , B k ) where the (cid:96) -th component of c is u 0 x j (1) i (1) u 1 x j ( n ) i ( n ) u n with u 1 , . . . , u n ; otherwise, toCowD( u ) is undefined.",
"The partial function toCowD is a bijection between L ( G cf ) and cowD c G (proven in appendix A.2).",
"(cid:4)",
"Example 13 (continues ex. 1) .",
"We construct G cf = ( { S 1 , A 1 , A 2 , B 1 , B 2 } , G , S 1 , P cf ) where P cf contains, among others, the following rules: (1)1 = S 1 (cid:114) 1 1 (cid:113) 1 1 , 1 A 1 (cid:121) 1 1 , 1 (cid:113) 1 1 , 2 B 1 (cid:121) 1 1 , 2 (cid:113) 2 1 , 1 A 2 (cid:121) 2 1 , 1 (cid:113) 2 1 , 2 B 2 (cid:121) 2 1 , 2 (cid:122) 1 1 , (1)3 = B 1 (cid:114) 1 3 (cid:101) b (cid:113) 1 3 , 1 B 1 (cid:121) 1 3 , 1 (cid:122) 1 3 , (1)4 = A 1 (cid:74) 1 4 (cid:75) 1 4 , (2)4 = A 2 (cid:74) 2 4 (cid:75) 2 4 , (1)5 = B 1 (cid:74) 1 5 (cid:75) 1 5 , . . . .",
"Figure 3a shows the image of the word (cid:114) 1 1 (cid:114) 1 1 , 1 (cid:113) 1 4 (cid:121) 1 4 (cid:122) 1 1 , 1 (cid:114) 1 1 , 2 (cid:113) 1 3 (cid:101) b (cid:113) 1 3 , 1 (cid:74) 1 5 (cid:75) 1 5 (cid:121) 1 3 , 1 (cid:121) 1 3 (cid:122) 1 1 , 2 (cid:114) 2 1 , 1 (cid:113) 2 4 (cid:121) 2 4 (cid:122) 2 1 , 1 (cid:114) 2 1 , 2 (cid:113) 2 5 (cid:121) 2 5 (cid:122) 2 1 , 2 (cid:122) 1 1 in L ( G cf ) under toCowD .",
"In the following, we define a property called consistency to discern those cow derivations that correspond to derivations of the MCFG G .",
"Definition 14. Let s N + and t 1 , . . . , t s be cow derivations of G .",
"We call the set { t 1 , . . . , t s } consistent if there is a rule = A c ( B 1 , . . . , B k ) P such that root( t 1 ) = . . . = root( t s ) = , s = sort ( A ) , and for each i [ k ] : the set { sub ( i,j ) ( t (cid:96) ) | (cid:96) [ s ] , j [ sort ( B i )]: ( i, j ) out( t (cid:96) ) } is consistent.",
"If s = 1 , then we also call t 1 consistent .",
"(cid:3)",
"The cow derivation shown in fig.",
"3a is not consistent.",
"If we consider the set of nodes that is reachable from the root via edges labelled with a tuple whose first component is 2 (the right dotted box), then it is easy to see that the rules at these nodes are not equal.",
"A consistent cow derivation is shown in the appendix (fig. 6).",
"In this section, we describe several improvements to the vanilla parser (cf. end of sec. 2).",
"Since the definitions of AG , hom G , and mD G do not depend on the word w , we may compute appropriate representations for these objects before the beginning of the parsing process, and store them persistently.",
"(a) A cow derivation.",
"The dotted boxes show clusters of nodes that are reachable from the root via edges labelled with matching first components.",
"(b) Construction of new rules for each cluster in fig.",
"3a.",
"If there were any unused nonterminals in these constructed rules, they are removed and the indices of variables changed accordingly.",
"For each cluster, all reachable nodes are clustered via the first component of the labels as in fig.",
"3a.",
"(c) Construction of new rules from the clusters in fig.",
"3b.",
"In the following, we briefly describe each improvement that we applied to the vanilla parser: 1. Let us call a rule in G w -consistent if each string of terminals that occurs in (the composition function of) is a substring of w .",
"A rule is called useful w.r.t. w if it occurs in some complete derivation of G in which each rule is w -consistent.",
"In the construction of the FSA for RG hom 1 G ( w ) , we only calculate the transitions that relate to rules of G that are useful w.r.t. w .",
"2. The function FILTER (mD G ) is decomposed into FILTER (mD G ) FILTER (D G ) in preparation for the next two items.",
"3. FILTER (D G ) SORT ( (cid:48) ) is implemented with the algorithm EXTRACTDYCK ( G, (cid:48) ) that uses dynamic programming to extract Dyck words from the language of the given FSA more efficiently.",
"For this, we extend alg.",
"1 by Hulden (2011) to use weights such that it returns the elements in descending order w.r.t. (cid:48) and (cid:2) (see appendix A.3, alg. 3).",
"In our implementation, we change this al-Algorithm 1 reads off cow derivations from words of the CFA of G .",
"gorithm even further such that items are explored in a similar fashion as in the CKY-algorithm (Kasami, 1966; Younger, 1967; Cocke and Schwartz, 1970).",
"4. For FILTER (mD G ) , instead of isMember (cid:48) by Denkinger (2017, p. 2830), which runs in quadratic time, we use the composition of two algorithms that run in linear time: alg.",
"1, which reads a cow derivation off a given word in RG D G , and an algorithm that checks a given cow derivation for consistency.",
"(This is similar to alg. 2; but instead of derivations, we return Boolean values. The algorithm is given explicitly in sec. A.3.) 5. Algorithm 2 computes the bijection between cowD c G and D c G (see prop. 15).",
"Analogously to def.",
"14, the function TOMCFGDERIV ' checks a set of cow derivations for equivalence of the root symbol and the function COLLECTCHILDREN groups the subtrees via the first component of the successor labels.",
"It is easy to see that TOMCFGDERIV ( t ) is only defined if the cow derivation t is consistent (cf. item 4).",
"Thus, we use TOMCFGDERIV in combination with TOCOWDERIV to replace MAP ( TODERIV ) FILTER (mD G ) .",
"The time complexity of alg.",
"2 is linear in the Algorithm 2 converts a consistent element of cowD c G into a complete derivation of G .",
"number of vertices of the given cow derivation.",
"This number, in turn, is linear in the length of the processed candidate.",
"The parser obtained by applying items 1 to 5 to the vanilla parser is visualised in fig.",
"2 (bottom).",
"It is sound and complete.",
"3 The following two modifications (items 6 and 7) destroy both soundness and completeness.",
"Item 6 allows only the best intermediate results to be processed further and limits the results to a subset of those of the vanilla parser.",
"In item 7, we compensate this by an approximation we consider useful in practise.",
"6. EXTRACTDYCK is extended with an optional implementation of beam search by limiting the amount of items for certain groups of state spans to a specific number ( beam width ), cf.",
"Collins (1999).",
"In our implementation, we chose these groups of state spans such that they correspond to equal states in 3 A parser is complete if it (eventually) computes all complete derivations of the given word in the given grammar.",
"A parser is called sound if all computed parses are complete derivations of the given word in the given grammar.",
"the automaton for hom 1 G ( w ) .",
"Moreover, we introduce a variable that limits the number of candidates that are yielded by Algorithm 3 ( candidate count ).",
"Both variables are the meta-parameters of our parser.",
"7. We introduce a fallback mechanism for the case that FILTER (mD G ) has input candidates but an empty output.",
"Usually, in that case, we would suggest there is no derivation for w in G , yet for robustness, it is preferable to output some parse.",
"Figure 3 illustrates a strategy to construct a complete derivation from any complete cow derivation with an example.",
"We implemented the parser with the modifications sketched in sec. 4 for -free and simple wMCFGs, 4 but no problems should arise generalising this implementation to arbitrary wMCFGs.",
"The implementation is available as a part of Rustomata, 5 a framework for weighted automata with storage written in the programming language Rust.",
"We used the NeGra corpus (German newspaper articles, 20,602 sentences, 355,096 tokens; Skut et al., 1998) to compare our parser to Grammatical Framework (Angelov and Ljunglof, 2014), rparse (Kallmeyer and Maier, 2013), and discodop (van Cranenburgh et al., 2016) with respect to parse time and accuracy.",
"6 Our experiments were conducted on defoliated trees, i.e. we removed the leaves from each tree in the corpus.",
"Parsing was performed on gold part-of-speech tags.",
"We performed a variant of ten-fold cross validation (short: TFCV; cf. Mosteller and Tukey, 1968), i.e. we split the corpus into ten consecutive parts; each part becomes the validation set in one iteration while the others serve as training set.",
"We used the first iteration to select suitable values for our meta-parameters and the remaining nine for validation.",
"In case of Rustomata, a binarised and markovized grammar was induced with discodop (head-outward binarisation, v = 1 , h = 2 , cf. Klein and Manning, 2003) in each iteration.",
"For all other parsers, we induced a proba-4 A wMCFG G is called -free and simple if each composition function that occurs in the rules of G is either of the form [ u 1 , . . . , u s ] for some non-empty strings of variables u 1 , . . . , u s , or of the form [ t ] for some terminal symbol t .",
"5 available on https://github.com/tud-fop/ rustomata .",
"We used commit 867a451 for evaluation.",
"6 The evaluation scripts are available on https:// github.com/truprecht/rustomata-eval .",
"bilistic LCFRS with the respective default config-urations (for details, cf. the evaluation scripts).",
"After that, we ran our parser on each sentence of the validation set and recorded the parse time and the computed 1-best parse.",
"The computed parses were evaluated against the gold parses of the validation set w.r.t. precision, recall, and f 1 -score (according to the labelled parseval measures, cf. Black et al., 1991; Collins, 1997, we used the implementation by van Cranenburgh et al., 2016).",
"Previous experiments with an implementation of the vanilla parser already struggled with small subsets (we used grammars extracted from 250 1500 parse trees) of the NeGra corpus.",
"Therefore, we omit evaluation of the vanilla parser.",
"Meta-parameters.",
"A grid search for meta-parameters was performed on sentences of up to 20 tokens (see the appendix, tab. 2, for a detailed listing).",
"The results suggested to set the beam width to 200 and the candidate count to 10,000.",
"Comparison to other parsers.",
"The experiments were performed on sentences with up to 30 tokens.",
"We instructed rparse, Grammatical Framework (short: GF) and Rustomata (short: OP) to stop parsing each sentence after 30 seconds ( timeout ).",
"Disco-dop did not permit passing a timeout.",
"In the case of disco-dop's LCFRS parser (short: ddlcfrs), we limited the validation set to sentences parser precision recall f 1 -score coverage NeGra corpus, | w | 20 ddctf-dop 81 .",
"of at most 20 tokens, since ddlcfrs frequently exceeded 30 seconds of parse time for longer sentences in preliminary tests.",
"Disco-dop's coarse-to-fine data-oriented parser (short: ddctf-dop) and disco-dop's coarse-to-fine LCFRS parser (short: ddctf-lcfrs) rarely exceeded 30 seconds of parse time in preliminary tests and we let them run on sentences of up to 30 tokens without the timeout.",
"Figure 4a shows the parse times for each sentence length and parser.",
"The parsers ddctf-dop, ddctf-lcfrs, GF, and OP perform similar for sentences of up to 20 tokens.",
"The parse times of rparse and ddlcfrs grow rapidly after 10 and 16 tokens, respectively.",
"Rparse even exceeds the timeout for more than half of the test sentences that are longer than 15 tokens.",
"For sentences with up to 30 tokens, the parse times of ddctf-dop, ddctf-lcfrs and OP seem to remain almost constant.",
"Table 1 shows the accuracy (i.e. precision, recall, and f 1 -score) and the coverage (i.e. the percentage of sentences that could be parsed) for each parser on the validation set.",
"We report these scores to assert a correct implementation of our parser and to compare the different approximation strategies (and our fallback mechanism) implemented in the parsers.",
"The low coverage of rparse stems from the frequent occurrences of timeouts.",
"They also depress the recall for rparse.",
"For sentences with at most 20 tokens, ddlcfrs, ddctf-lcfrs and OP perform very similar.",
"These three parsers are outperformed by ddctf-dop in all aspects.",
"For sentences of up to 30 tokens, the scores of all tested parsers drop similarly.",
"However, ddctf-dop's scores drop the least amount.",
"We repeated a part of the experiments with the Lassy corpus (Lassy Small, various kinds of written Dutch, 65,200 sentences, 975,055 tokens; van Noord et al., 2013).",
"Since it is considerably larger than the NeGra corpus, we limited the experiments to one iteration of TFCV, and we only investigate OP, ddctf-lcfrs, and ddctf-dop.",
"The results are shown in fig.",
"4b (parse time) and at the bottom of tab.",
"1 (accuracy).",
"Figure 4b shows the difference of ddctf-lcfrs, ddctf-dop and OP in terms of parse times (which is not discernible in fig. 4a).",
"This plot shows that OP maintains very small parse times even for large copora compared to the state-of-the-art parser disco-dop.",
"All in all, our parser performs comparable to state-of-the-art MCFG parsers (GF, rparse, ddlcfrs, ddctf-lcfrs) and, using the NeGra corpus, it shows excellent results in parse time and good results in accuracy.",
"Moreover, our parser can deal with any -free and simple MCFG provided by an external tool, making it more flexible than discodop and rparse.",
"However, we are not able to compete with ddctf-dop in terms of accuracy, since discontinuous data-oriented parsing is a more accurate formalism (van Cranenburgh and Bod, 2013).",
"We see potential to improve the fallback mechanism explained in sec. 4. For now, we only considered reporting the first cow derivation.",
"By introducing some degree of consistency of cow derivations, we could select a cow derivation that is closer to a derivation of G .",
"Since recognisable languages are closed under inverse homomorphisms, we can use any recognisable language as input for hom 1 G (cf. fig. 2) without changing the rest of the pipeline.",
"This is useful when the input of the parsing task is ambiguous, as in lattice-based parsing (e.g. Goldberg and Tsarfaty, 2008).",
"Moreover, since weighted recognisable languages are closed under inverse homomorphisms and scalar product, we can even use a weighted recognisable language as input for hom 1 G , as in the setting of Rastogi et al. (2016).",
"We thank our colleague Kilian Gebhardt as well as the anonymous reviewers for their insightful comments on drafts of this paper."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other"
] |
[
"Transformer language models have shown remarkable ability in detecting when a word is anomalous in context, but likelihood scores offer no information about the cause of the anomaly.",
"In this work, we use Gaussian models for density estimation at intermediate layers of three language models (BERT, RoBERTa, and XLNet), and evaluate our method on BLiMP, a grammaticality judgement benchmark.",
"In lower layers, surprisal is highly correlated to low token frequency, but this correlation diminishes in upper layers.",
"Next, we gather datasets of morphosyntactic, semantic, and commonsense anomalies from psycholinguistic studies; we find that the best performing model RoBERTa exhibits surprisal in earlier layers when the anomaly is morphosyntactic than when it is semantic, while commonsense anomalies do not exhibit surprisal at any intermediate layer.",
"These results suggest that language models employ separate mechanisms to detect different types of linguistic anomalies.",
"Transformer-based language models (LMs) have achieved remarkable success in numerous natural language processing tasks, prompting many probing studies to determine the extent of their linguistic knowledge.",
"A popular approach is to formulate the problem as a multiple-choice task, where the LM is considered correct if it assigns higher likelihood to the appropriate word than an inappropriate one, given context (Gulordava et al., 2018; Ettinger, 2020; Warstadt et al., 2020).",
"The likelihood score, however, only gives a scalar value of the degree that a word is anomalous in context, and cannot distinguish between different ways that a word might be anomalous.",
"It has been proposed that there are different types of linguistic anomalies.",
"Chomsky T h e c a t w o n ' t e a t i n g t h e f oo d 0 1 2 3 4 5 6 7 8 9 10 11 12 L a y e r T h e p l a n e l a u g h e d a t t h e r un w a y 0 1 2 3 4 5 6 7 8 9 10 11 12 Figure 1: Example sentence with a morphosyntactic anomaly (left) and semantic anomaly (right) (anoma-lies in bold).",
"(1957) distinguished semantic anomalies ( color-less green ideas sleep furiously ) from ungrammaticality ( furiously sleep ideas green color-less ).",
"Psycholinguistic studies initially suggested that different event-related potentials (ERPs) are produced in the brain depending on the type of anomaly; e.g., semantic anomalies produce negative ERPs 400 ms after the stimulus, while syntactic anomalies produce positive ERPs 600 ms after (Kutas et al., 2006).",
"Here, we ask whether Transformer LMs show different surprisals in their intermediate layers depending on the type of anomaly.",
"However, LMs do not compute likelihoods at intermediate layers only at the final layer.",
"In this paper, we introduce a new tool to probe for surprisal at intermediate layers of BERT (De-vlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019), formulating the problem as density estimation.",
"We train Gaussian models to fit distributions of embeddings at each layer of the LMs.",
"Using BLiMP (Warstadt et al., 2020) for evaluation, we show that this model is effective at grammaticality judgement, requiring only a small amount of in-domain text for training.",
"Figure 1 shows the method using the RoBERTa model on two example sentences.",
"We apply our model to test sentences drawn from BLiMP and 7 psycholinguistics studies, exhibiting morphosyntactic, semantic, and commonsense anomalies.",
"We find that morphosyntactic anomalies produce out-of-domain embeddings at earlier layers, semantic anomalies at later layers, and no commonsense anomalies, even though the LM's final accuracy is similar.",
"We show that LMs are internally sensitive to the type of linguistic anomaly, which is not apparent if we only had access to their softmax probability outputs.",
"Our source code and data are available at: https://github.com/SPOClab-ca/ layerwise-anomaly .",
"Soon after BERT's release, many papers invented probing techniques to discover what linguistic knowledge it contains, and how this information is distributed between layers (e.g., Rogers et al. (2021) provides a comprehensive overview).",
"Tenney et al. (2019) used edge probing to determine each layer's contribution to a task's performance, and discovered that the middle layers contributed more when the task was syntactic, and the upper layers more when the task was semantic.",
"Several papers found that BERT's middle layers contain the most syntactic information.",
"Kelly et al. (2020) found that BERT's middle layers are best at distinguishing between sentences with direct and indirect object constructions.",
"Hewitt and Manning (2019) used a structural probe to recover syntax trees from contextual embeddings, and found the performance peaked in middle layers.",
"Probing results are somewhat dependent on the choice of linguistic formalism used to annotate the data, as Kulmizev et al. (2020) found for syntax, and Kuznetsov and Gurevych (2020) found for semantic roles.",
"Miaschi et al. (2020) examined the layerwise performance of BERT for a suite of linguistic features, before and after fine tuning.",
"Our work further investigates what linguistic information is contained in different layers, with a focus on anomalous inputs.",
"Many recent probing studies used grammaticality judgement tasks to test the knowledge of spe-cific phenomena in LMs.",
"Warstadt et al. (2019) gathered sentences from linguistic publications, and evaluated by Matthews Correlation with the ground truth.",
"More commonly, the model is presented with a binary choice between an acceptable and unacceptable sentence: BLiMP (Warstadt et al., 2020) used templates to generate 67k such sentence pairs, covering 12 types of linguistic phenomena.",
"Similarly, Hu et al. (2020) created syntactic tests using templates, but defined success criteria using inequalities of LM perplexities.",
"In contrast with artificial templates, Gulordava et al. (2018) generated test cases by perturbing natural corpus data to test long-distance dependencies.",
"Most grammaticality studies focused on syntactic phenomena, but Rabinovich et al. (2019) tested LMs' sensitivity to semantic infelicities involving indefinite pronouns.",
"Violations of selectional restrictions are one type of linguistic unacceptability, defined as a semantic mismatch between a verb and an argument.",
"Sasano and Korhonen (2020) examined the geometry of word classes (e.g., words that can be a direct object of the verb play') in word vector models; they compared single-class models against discriminative models for learning word class boundaries.",
"Chersoni et al. (2018) tested distributional semantic models on their ability to identify selectional restriction violations using stimuli from two psycholinguistic datasets.",
"Finally, Metheniti et al. (2020) tested how much BERT relies on selectional restriction information versus other contextual information for making masked word predictions.",
"The N400 response is a negative event-related potential that occurs roughly 400ms after a stimulus in human brains, and is generally associated with the stimulus being semantically anomalous with",
"respect to the preceding context (Kutas and Federmeier, 2011).",
"Although many studies have been performed with a diverse range of linguistic stimuli, exactly what conditions trigger the N400 response is still an open question.",
"Frank et al. (2015) found that the N400 response is correlated with surprisal, i.e., how unlikely an LM predicts a word given the preceding context.",
"Recently, several studies have investigated relationships between surprisal in neural LMs and the N400 response.",
"Michaelov and Bergen (2020) compared human N400 amplitudes with LSTM-based models using stimuli from several psycholinguistic studies.",
"Ettinger (2020) used data from three psycholinguistic studies to probe BERT's knowledge of commonsense and negation.",
"Our work is similar to the latter we leverage psycholinguistic studies for their stimuli, but we do not use the their N400 amplitude results.",
"We use the transformer language model as a contextual embedding extractor (we write this as BERT for convenience).",
"Let L be the layer index, which ranges from 0 to 12 on all of our models.",
"Using a training corpus { w 1 , , w T } , we extract contextual embeddings at layer L for each token: x ( L ) 1 , , x ( L ) T = BERTL ( w 1 , , w T ) .",
"(1) Next, we fit a multivariate Gaussian on the extracted embeddings: x ( L ) 1 , , x ( L ) T N ( b L , b L ) .",
"(2) For evaluating the layerwise surprisal of a new sentence s = [ t 1 , , t n ] , we similarly extract contextual embeddings using the language model: y 1 , , y n = BERTL ( t 1 , , t n ) .",
"(3) The surprisal of each token is the negative log likelihood of the contextual vector according to the multivariate Gaussian: G i = log p ( y i | b L , b L ) for i = 1 . . . n.",
"(4) Finally, we define the surprisal of sentence s as the sum of surprisals of all of its tokens, which is also the joint log likelihood of all of the embeddings: surprisal L ( t 1 , , t n ) = n X i =1 G i = log p ( y 1 , , y n | b L , b L ) .",
"The theoretical motivation for using the sum of log likelihoods is that when we fit a Gaussian model with full covariance matrix, low likelihood corresponds exactly to high Mahalanobis distance from the in-distribution points.",
"The score given by the Gaussian model is: G = log p ( y | b L , b L ) = log 1 (2 ) D/ 2 | b L | 1 / 2 exp( 1 2 d 2 ) !",
", (6) where D is the dimension of the vector space, and d is the Mahalanobis distance: d = q ( y b L ) T b 1 L ( y b L ) .",
"b thus the negative log likelihood is the squared Mahalanobis distance plus a constant.",
"Various methods based on Mahalanobis distance have been used for anomaly detection in neural networks; for example, Lee et al. (2018) proposed a similar method for out-of-domain detection in neural classification models, and Cao et al. (2020) found the Mahalanobis distance method to be competitive with more sophisticated methods on medical out-of-domain detection.",
"In Transformer models, Podolskiy et al. (2021) used Mahalanobis distance for out-of-domain detection, outperforming methods based on softmax probability and likelihood ratios.",
"Gaussian assumptions.",
"Our model assumes that the embeddings at every layer follow a multivariate Gaussian distribution.",
"Since the Gaussian distribution is the maximum entropy distribution given a mean and covariance matrix, it makes the fewest assumptions and is therefore a reasonable default.",
"Hennigen et al. (2020) found that embeddings sometimes do not follow a Gaussian distribution, but it is unclear what alternative distribution would be a better fit, so we will assume a Gaussian distribution in this work.",
"For all of our experiments, we use the base' versions of pretrained language models BERT (De-vlin et al., 2019), RoBERTa (Liu et al., 2019), and",
"XLNet (Yang et al., 2019), provided by Hugging-Face (Wolf et al., 2020).",
"Each of these models have 12 contextual layers plus a 0 th static layer, and each layer is 768-dimensional.",
"We train the Gaussian model on randomly selected sentences from the British National Corpus (Leech, 1992), representative of acceptable English text from various genres.",
"We evaluate on BLiMP (Warstadt et al., 2020), a dataset of 67k minimal sentence pairs that test acceptability judgements across a variety of syntactic and semantic phenomena.",
"In our case, a sentence pair is considered correct if the sentence-level surprisal of the unacceptable sentence is higher than that of the acceptable sentence.",
"How much training data is needed?",
"We experiment with training data sizes ranging from 10 to 10,000 sentences (Figure 2a).",
"Compared to the massive amount of data needed for pretraining the LMs, we find that a modest corpus suf-fices for training the Gaussian anomaly model, and a plateau is reached after 1000 sentences for all three models.",
"Therefore, we use 1000 training sentences (unless otherwise noted) for all subsequent experiments in this paper.",
"Which layers are sensitive to anomaly?",
"We vary L from 0 to 12 in all three models (Figure 2b).",
"The layer with the highest accuracy differs between models: layer 9 has the highest accuracy for BERT, 11 for RoBERTa, and 6 for XLNet.",
"All models experience a sharp drop in the last layer, likely because the last layer is specialized for the MLM pretraining objective.",
"Comparisons to other models.",
"Our best-performing model is RoBERTa, with an accuracy of 0.830.",
"This is slightly higher the best model reported in BLiMP (GPT-2, with accuracy 0.801).",
"We do not claim to beat the state-of-the-art on BLiMP: Salazar et al. (2020) obtains a higher accuracy of 0.865 using RoBERTa-large.",
"Even though the main goal of this paper is not to maximize accuracy on BLiMP, our Gaussian anomaly model is competitive with other transformer-based models on this task.",
"In Appendix A, we explore variations of the Gaussian anomaly model, such as varying the type of covariance matrix, Gaussian mixture models, and one-class SVMs (Scholkopf et al., 2000).",
"However, none of these variants offer a significant improvement over a single Gaussian model with full covariance matrix.",
"We notice that surprisal scores in the lower layers are sensitive to token frequency: higher frequency tokens produce embeddings close to the center of the Gaussian distribution, while lower frequency tokens are at the periphery.",
"The effect gradually diminishes towards the upper layers.",
"To quantify the sensitivity to frequency, we compute token-level surprisal scores for 5000 sentences from BNC that were not used in training.",
"We then compute the Pearson correlation between the surprisal score and log frequency for each token (Figure 3).",
"In all three models, there is a high correlation between the surprisal score and log frequency at the lower layers, which diminishes at the upper layers.",
"A small positive correlation persists until the last layer, except for XLNet, in which the correlation eventually disappears.",
"There does not appear to be any reports of this phenomenon in previous work.",
"For static word vectors, Gong et al. (2018) found that embeddings for low-frequency words lie in a different region of 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 6 7 8 9 10 11 12 Layer P ea r s on C o rr e l a t i on BERT RoBERTa XLNet Figure 3: Pearson correlation between token-level surprisal scores (Equation 4) and log frequency.",
"the embedding space than high-frequency words.",
"We find evidence that the same phenomenon occurs in contextual embeddings (Appendix B).",
"In this scenario, the Gaussian model fits the high-frequency region and assigns lower likelihoods to the low-frequency region, explaining the positive correlation at all layers; however, it is still unclear why the correlation diminishes at upper layers.",
"We turn to the question of whether LMs exhibit different behaviour when given inputs with different types of linguistic anomalies.",
"The task of partitioning linguistic anomalies into several distinct classes can be challenging.",
"Syntax and semantics have a high degree of overlap there is no widely accepted criterion for distinguishing between ungrammaticality and semantic anomaly (e.g., Abrusan (2019) gives a survey of current proposals), and Poulsen (2012) challenges this dichotomy entirely.",
"Similarly, Warren et al. (2015) noted that semantic anomalies depend somewhat on world knowledge.",
"Within a class, the anomalies are also heterogeneous (e.g., ungrammaticality may be due to violations of agreement, wh -movement, negative polarity item licensing, etc), which might each affect the LMs differently.",
"Thus, we define three classes of anomalies that do not attempt to cover all possible linguistic phenomena, but captures different levels of language processing while retaining internal uniformity:",
"sandwich ), or incorrect verb tense or aspect inflection ( *the boy eaten the sandwich ).",
"In each case, the sentence can be corrected by changing the inflectional form of one word.",
"2. Semantic anomaly : a violation of a selectional restriction, such as animacy ( #the house eats the sandwich ).",
"In these cases, the sentence can be corrected by replacing one of the verb's arguments with another one in the same word class that satisfies the verb's selectional restrictions.",
"3. Commonsense anomaly : sentence describes an situation that is atypical or implausible in the real world but is otherwise acceptable ( #the customer served the waitress ).",
"We use two sources of data for experiments on linguistic anomalies: synthetic sentences generated from templates, and materials from psycholinguistic studies.",
"Both have advantages and disadvantages synthetic data can be easily generated in large quantities, but the resulting sentences may be odd in unintended ways.",
"Psycholinguistic stimuli are designed to control for confounding factors (e.g., word frequency) and human-validated for acceptability, but are smaller (typically fewer than 100 sentence pairs).",
"We curate a set of 12 tasks from BLiMP and 7 psycholinguistic studies 1 .",
"Each sentence pair consists of a control and an anomalous sentence, so that all sentences within a task differ in a consistent manner.",
"Table 1 shows an example sentence pair from each task.",
"We summarize each dataset:",
"1. BLiMP (Warstadt et al., 2020): we use subject-verb and determiner-noun agreement tests as morphosyntactic anomaly tasks.",
"For simplicity, we only use the basic regular sentences, and exclude sentences involving irregular words or distractor items.",
"We also use the two argument structure tests involving animacy as a semantic anomaly task.",
"All three BLiMP tasks therefore have 2000 sentence pairs.",
"1 Several of these stimuli have been used in natural language processing research.",
"Chersoni et al. (2018) used the data from Pylkkanen and McElree (2007) and Warren et al. (2015) to probe word vectors for knowledge of selectional restrictions.",
"Ettinger (2020) used data from Federmeier and Kutas (1999) and Chow et al. (2016), which were referred to as CPRAG-102 and ROLE-88 respectively.",
"2. Osterhout and Nicol (1999): contains 90 sentence triplets containing a control, syntactic, and semantic anomaly.",
"Syntactic anomalies involve a modal verb followed by a verb in -ing form; semantic anomalies have a selectional restriction violation between the subject and verb.",
"There are also double anomalies (simultaneously syntactic and semantic) which we do not use.",
"3. Pylkkanen and McElree (2007): contains 70 sentence pairs where the verb is replaced in the anomalous sentence with one that requires an animate object, thus violating the selectional restriction.",
"In half the sentences, the verb is contained in an embedded clause.",
"4. Warren et al. (2015): contains 30 sentence triplets with a possible condition, a selectional restriction violation between the subject and verb, and an impossible condition where the subject cannot carry out the action, i.e., a commonsense anomaly.",
"5. Osterhout and Mobley (1995): we use data from experiment 2, containing 90 sentence pairs where the verb in the anomalous sentence is semantically inappropriate.",
"The experiment also tested gender agreement errors, but we do not include these stimuli.",
"6. Federmeier and Kutas (1999): contains 34 sentence pairs, where the final noun in each anomalous sentence is an inappropriate completion, but in the same semantic category as the expected completion.",
"7. Chow et al. (2016): contains 44 sentence pairs, where two of the nouns in the anomalous sentence are swapped to reverse their roles.",
"This is the only task in which the sentence pair differs by more than one token.",
"8. Urbach and Kutas (2010): contains 120 sentence pairs, where the anomalous sentence replaces a patient of the verb with an atypical one.",
"Let D = { ( s 1 , s 1 ) , , ( s n , s n ) } be a dataset of sentence pairs, where s i is a control sentence and s i is an anomalous sentence.",
"For each layer L , we define the surprisal gap as the mean difference of surprisal scores between the control and anomalous sentences, scaled by the standard deviation: surprisal gap L ( D ) = E { surprisal L ( s i ) surprisal L ( s i ) } ni =1 { surprisal L ( s i ) surprisal L ( s i ) } ni =1 (9) The surprisal gap is a scale-invariant measure of sensitivity to anomaly, similar to a signal-to-noise ratio.",
"While surprisal scores are unitless, the surprisal gap may be viewed as the number of standard deviations that anomalous sentences trigger surprisal above control sentences.",
"This is advantageous over accuracy scores, which treats the sentence pair as correct when the anomalous sentence has higher surprisal by any margin; this hard cutoff masks differences in the magnitude of surprisal.",
"The metric also allows for fair comparison of surprisal scores across datasets of vastly different sizes.",
"Figure 4 shows the surprisal gap for all 12 tasks, using the RoBERTa model; the results for BERT and XLNet are in the Appendix C. Next, we compare the performance of the Gaussian model with the masked language model (MLM).",
"We score each instance as correct if the masked probability of the correct word is higher than the anomalous word.",
"One limitation of the MLM approach is that it requires the sentence pair to be identical in all places except for one token, since the LMs do not support modeling joint probabilities over multiple tokens.",
"To ensure fair comparison between GM and MLM, we exclude instances where the differing token is out-of-vocabulary in any of the LMs (this excludes approximately 30% of instances).",
"For the Gaussian model, we compute accuracy using the best-performing layer for each model (Section 3.2).",
"The results are listed in Table",
"2. 5 Discussion 5.1 Anomaly type and surprisal Morphosyntactic anomalies generally appear earlier than semantic anomalies (Figure 4).",
"The surprisal gap plot exhibits different patterns depending on the type of linguistic anomaly: morphosyntactic anomalies produce high surprisal relatively early (layers 3-4), while semantic anomalies produce low surprisals until later (layers 9 and above).",
"Commonsense anomalies do not result in surprisals at any layer: the surprisal gap is near zero for all of the commonsense tasks.",
"The observed difference between morphosyntactic and semantic Commonsense Urbach and Kutas Commonsense Chow et al.",
"anomalies is consistent with previous work (Ten-ney et al. , 2019), which found that syntactic information appeared earlier in BERT than semantic information.",
"One should be careful and avoid drawing conclusions from only a few experiments.",
"A similar situation occurred in psycholinguistics research (Kutas et al., 2006): early results suggested that the N400 was triggered by semantic anomalies, while syntactic anomalies triggered the P600 a different type of ERP.",
"However, subsequent experiments found exceptions to this rule, and now it is believed that the N400 cannot be categorized by any standard dichotomy, like syntax versus semantics (Kutas and Federmeier, 2011).",
"In our case, Pylkkanen and McElree (2007) is an exception: the task is a semantic anomaly, but produces surprisals in early layers, similar to the morphosyntactic tasks.",
"Hence it is possible that the dichotomy is something other than syntax versus semantics; we leave to future work to determine more precisely what conditions trigger high surprisals in lower versus upper layers of LMs.",
"The masked language model (MLM) usually outperforms the Gaussian anomaly model (GM), but the difference is uneven.",
"MLM performs much better than GM on commonsense tasks, slightly better on semantic tasks, and about the same or slightly worse on morphosyntactic tasks.",
"It is not obvious why MLM should perform better than GM, but we note two subtle differences between the MLM and GM setups that may be contributing factors.",
"First, the GM method adds up the surprisal scores for the whole sequence, while MLM only considers the softmax distribution at one token.",
"Second, the input sequence for MLM always contains a [MASK] token, whereas GM takes the original unmasked sequences as input, so the representations are never identical between the two setups.",
"MLM generally outperforms GM, but it does not solve every task: all three LMs fail to perform above chance on the data from Warren et al. (2015).",
"This set of stimuli was designed so that both the control and impossible completions are not very likely or expected, which may have caused the difficulty for the LMs.",
"We excluded the task of Chow et al. (2016) for MLM because the control and anomalous sentences differed by more than one token 2 .",
"RoBERTa is the best-performing of the three LMs in both the GM and MLM settings: this is expected since it is trained with the most data and performs well on many natural language benchmarks.",
"Surprisingly, XLNet is ill-suited for this task and performs worse than BERT, despite having a similar model capacity and training data.",
"XL-2 Sentence pairs with multiple differing tokens are inconvenient for MLM to handle, but this is not a fundamental limitation.",
"For example, Salazar et al. (2020) proposed a modifi-cation to MLM to handle such cases: they compute a pseudolog-likelihood score for a sequence by replacing one token at a time with a [MASK] token, applying MLM to each masked sequence, and summing up the log likelihood scores.",
"Net (Appendix C) show some differences from RoBERTa: only morphosyntactic tasks produce out-of-domain embeddings in these two models, and not semantic or commonsense tasks.",
"Evidently, how LMs behave when presented with anomalous inputs is dependent on model architecture and training data size; we leave exploration of this phenomenon to future work.",
"We use Gaussian models to characterize out-of-domain embeddings at intermediate layers of Transformer language models.",
"The model requires a relatively small amount of in-domain data.",
"Our experiments reveal that out-of-domain points in lower layers correspond to low-frequency tokens, while grammatically anomalous inputs are out-of-domain in higher layers.",
"Furthermore, morphosyntactic anomalies are recognized as out-of-domain starting from lower layers compared to syntactic anomalies.",
"Commonsense anomalies do not generate out-of-domain embeddings at any layer, even when the LM has a preference for the correct cloze completion.",
"These results show that depending on the type of linguistic anomaly, LMs use different mechanisms to produce the output softmax distribution.",
"We thank Julian Salazar and our anonymous reviewers for their helpful suggestions.",
"YX is funded through an NSERC Discovery Grant, a SSHRC Insight Grant, and an Ontario ERA award.",
"FR is supported by a CIFAR Chair in Artificial Intelligence."
] | [
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Existing pre-trained transformer analysis works usually focus only on one or two model families at a time, overlooking the variability of the architecture and pre-training objectives.",
"In our work, we utilize the oLMpics benchmark and psycholinguistic probing datasets for a diverse set of 29 models including T5, BART, and ALBERT.",
"Additionally, we adapt the oLMpics zero-shot setup for autoregressive models and evaluate GPT networks of different sizes.",
"Our findings show that none of these models can resolve compositional questions in a zero-shot fashion, suggesting that this skill is not learnable using existing pre-training objectives.",
"Furthermore, we find that global model decisions such as architecture, directionality, size of the dataset, and pre-training objective are not predictive of a model's linguistic capabilities.",
"The code for this study is available on GitHub 1 .",
"After the initial success of transfer learning in natural language processing (Howard and Ruder, 2018; Peters et al., 2018), the number of pre-trained models in NLP has increased dramatically (Radford and Narasimhan, 2018; Devlin et al., 2018; Lewis et al., 2019; Liu et al., 2019b; Raffel et al., 2019; Lan et al., 2019; Dong et al., 2019).",
"However, there is a limited understanding of why certain models perform better than others and what linguistic capabilities they acquire through pre-training.",
"While a lot of work has been done to evaluate these models on general natural language understanding datasets (Wang et al., 2018, 2019; Lai et al., 2017), such datasets do not allow researchers to identify the specific linguistic capabilities of a model.",
"Furthermore the performance on these The first two authors made equal contribution to this work.",
"datasets results from a combination of pre-trained knowledge and task-specific information learned through fine-tuning.",
"Probing tasks (Talmor et al., 2019; Zagoury et al., 2021; McCoy et al., 2019; Goldberg, 2019) give a promising solution to this problem, as they evaluate specific capabilities of pre-trained models, and in many cases, these tasks are designed for zero-shot evaluation, which reveals the knowledge that models have actually learned purely through the upstream task.",
"Currently, most in-depth analysis studies focus on one or two model families.",
"Many analysis papers only probe BERT and similar models (Ettinger, 2020; Kobayashi et al., 2020; Gar Soler and Apidianaki, 2020; Ravichander et al., 2020; Zagoury et al., 2021; Kassner et al., 2020; Mohebbi et al., 2021; Clark et al., 2020; Liu et al., 2021).",
"Fortunately, this trend is changing and now we see more papers that probe models such as ALBERT , T5 or BART (Mosbach et al., 2020; Phang et al., 2021; Jiang et al., 2021).",
"However, only a small number of analysis papers have probed multiple (three or more) model families (Zhou et al., 2021; Ilharco et al., 2021).",
"In our work, we test 8 families of models on oLMpics tasks (Talmor et al., 2019) and 6 families on psycholinguistic tasks from Ettinger (2020).",
"These models differ in size, architecture, pretraining objective, dataset size, and have other small yet important differences.",
"Such a diverse set of models provides a broader view of what linguistic capabilities are affected by the change of any of these properties.",
"We also include several distilled models in our analysis.",
"We find that different models excel in different symbolic reasoning tasks, suggesting that slight differences related to optimization or masking strategy might be more important than the pre-training approach, dataset size, or architecture .",
"Furthermore, in contrast to Radford et al. (2019), we find that for oLMpics tasks, model size rarely correlates with the model 3180 performance.",
"In addition, we observe that all models fail on composition tasks when evaluated in a zero-shot fashion.",
"Pre-trained model analysis is a rapidly growing area in NLP today.",
"There exists a number of methods for analyzing internal representations of a model, including structured head and FCN pruning (Michel et al., 2019; Voita et al., 2019; Prasanna et al., 2020), residual connection and LayerNormal-ization analysis (Kovaleva et al., 2021; Kobayashi et al., 2021), and analyzing attention patterns (Clark et al., 2019; Kovaleva et al., 2019).",
"Compared to these methods, probing tasks (Con-neau et al., 2018; Tenney et al., 2019) provide a more direct way to evaluate what a model can and cannot accomplish.",
"While it is possible to probe embeddings or hidden representations directly (Tenney et al., 2019; Liu et al., 2019a), the adoption of pre-trained language models has made it possible to evaluate such models by framing probing tasks close to the original model objective (Rad-ford et al., 2019; Talmor et al., 2019; Ettinger, 2020; Goldberg, 2019).",
"However, when a research area moves this quickly, it can be hard to keep up with many new models.",
"Most of the existing research (Gar Soler and Apidianaki, 2020; Zagoury et al., 2021; Kassner et al., 2020) papers compare only one or two model families.",
"Even some of the most recent works only probe BERT or very similar models (Zagoury et al., 2021; Liu et al., 2021).",
"Only a small number of analysis papers have probed multiple (three or more) model families (Zhou et al., 2021; Ilharco et al., 2021).",
"In contrast to existing work, we perform a large-scale probing of 29 models across 8 different model families.",
"We apply the existing probing benchmarks, namely, oLMpics (Talmor et al., 2019) and psycholinguistic datasets (Ettinger, 2020), to models that differ in the pre-training objective, datasets, size, architecture, and directionality.",
"We use 8 different model families in this study.",
"All of them are based on the transformer architecture and pre-trained on general-domain texts, but this 2 GPTNEO is trained on a 800Gb dataset.",
"is where the similarities end.",
"We summarize their major differences in Table 1.",
"In this section, we discuss and highlight the details that distinguish models, from the major ones to the ones that might appear very minor.",
"BERT (Devlin et al., 2018) is pre-trained on Book Corpus and Wikipedia using a combination of Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).",
"It uses GELU activations (Hendrycks and Gimpel, 2016) for fully-connected layers.",
"For the first 90% of the training iterations, the maximum length is 128, but then it is increased to 512.",
"RoBERTa (Liu et al., 2019b) is the most similar to BERT in this study; however, it differs from it in many small but important details: the pre-training dataset is considerably larger and includes Open-WebText (Gokaslan and Cohen, 2019), Stories (Trinh and Le, 2018), and CC-News.",
"RoBERTa does not use Next Sentence Prediction; applies masking dynamically; always trains with 512 max tokens; uses a smaller ADAM = 0 .",
"98 ; 8 times larger batch size than BERT ; and has a larger, byte-level BPE vocabulary (50K instead of 31K).",
"DistilBERT (Sanh et al., 2019) is a distilled version of BERT.",
"It has half the layers of BERT and is trained using soft targets produced by BERT .",
"ALBERT (Lan et al., 2019) shares parameters across transformer layers and uses an extra projection between the embedding and the first transformer layer.",
"It replaces NSP with the sentence-order prediction.",
"ALBERT uses n-gram masking and the LAMB (You et al., 2019) optimizer.",
"The training setup is similar to BERT , but it trains 90% of the time using the sequence length 512 and randomly reduces it in 10% of iterations.",
"Parameter sharing allows ALBERT to achieve performance similar to BERT with much fewer trainable parameters.",
"The smallest ALBERT model has 12M trainable parameters and the largest has 235M.",
"ALBERTv2 is a minor modification of ALBERT that was trained without dropout, for twice as many training steps with additional training data 3 .",
"GPT-2 (Radford et al., 2019) is a unidirectional transformer language model trained on the WebText dataset.",
"Unlike other models, it is a Pre-Norm transformer.",
"Similar to RoBERTa , GPT2 has a 50K vocabulary and a byte-level BPE but treats spaces as a separate symbol.",
"It also comes in multiple sizes from 124M parameters up to 2.8B parame-3 github.com/google-research/albert 3181 Model Parameters Pre-training Data Size Enc-Dec Autoregressive Tokenization Vocab.",
"ters.",
"There exist several popular reimplementations of this model, such as GPT-Neo (Black et al., 2021), which generally follow the original paper but differ in dataset (Gao et al., 2020), model, and training hyperparameters.",
"UniLM (Dong et al., 2019) utilizes several attention masks to control the access to context for each word token.",
"It uses a multitask objective that is modeled by applying different attention masks.",
"The mix of tasks includes masked language modeling, unidirectional language modeling, and sequence-to-sequence language modeling.",
"Additionally, it employs the NSP objective and is initialized using BERT model weights.",
"In optimization, it generally follows BERT but always uses 512 as the maximum sequence length.",
"BART (Lewis et al., 2019) is an encoder-decoder model that is trained on text infilling and sentence permutation tasks.",
"It is trained on the same dataset as RoBERTa .",
"Compared to BERT , BART does not use an additional projection when predicting word logits.",
"In optimization, it closely follows RoBERTa , but disables dropout for the final 10% of training.",
"T5 (Raffel et al., 2019) is also an encoder-decoder model.",
"It is trained using a text infilling task on the C4 dataset.",
"However, it only generates the text in place of the [MASK] token and not the full input sentence.",
"Architecturally, it is a Pre-Norm model and T5 LayerNorm does not use bias.",
"Output projection weights are tied with the input embedding matrix.",
"It uses 128 relative positional embeddings that are added at every layer.",
"Unlike most of the models in this study, it uses the ReLU activation.",
"The smallest T5 model used in this study has 233M parameters and the largest has 2.8B.",
"We have not evaluated the 11B T5 model due to hardware limitations.",
"Unlike the original T5 , T5v1 .",
"1 4 is trained on different data, does not tie logit layer with input embeddings, uses GEGLU activations (Shazeer, 2020) and no dropout.",
"It also slightly changes model shapes.",
"The oLMpics benchmark consists of eight tasks that test multiple specific skills, such as a model's ability to draw comparisons, understand negation, and perform simple linguistic composition tasks.",
"Table 2 shows examples for every task in oLMpics.",
"Zero-Shot vs. Multi-Shot A major advantage of the oLMpics tasks is that zero-shot evaluation can be performed for most tasks due to the task format.",
"Zero-shot evaluation eliminates the ambiguity of whether a model's knowledge is stored in its pre-trained representations or learned during fine-tuning.",
"However, a model may possess the necessary information but fail during zero-shot evaluation due to the wording of the task.",
"Therefore, multi-shot evaluation can also be informative, allowing the model to adapt to the input format and possibly learn task-specific features.",
"OLMpics tasks include training sets specifically for this reason, in order to separate the impact of fine-tuning from pre-training.",
"MC-MLM vs. MC-QA The oLMpics tasks are framed in one of two ways: MC-MLM (Multi-ple Choice-Masked Language Modeling) and MC-QA (Multiple Choice-Question Answering).",
"MC-MLM tasks are formulated as a masked language modeling task (Devlin et al., 2018), where the model needs to predict the word replaced by the MASK token.",
"An example of an Age Comparison sentence is A 41 year old is [MASK] a 42 year 4 huggingface.co/google/t5-v1_1-base 3182 Task Name Example Question Choices Age Comparison A 41 year old person age is [MASK] than a 42 year old person. younger, older Object Comparison The size of a nail is usually [MASK] than the size of a fork. smaller, larger Antonym Negation It was [MASK] a fracture, it was really a break. not, really Taxonomy Conjunction A ferry and a biplane are both a type of [MASK]. airplane, craft, boat Property Conjunction What is related to vertical and is related to honest? straight, trustworthy, steep Encyclopedic Composition When did the band where Alan Vega played first form? 1970, 1968, 1969 Hypernym Conjunction A basset and a tamarin are both a type of [MASK] primate, dog, mammal Multi-hop Composition When comparing a 21 year old, 15 year old, and 19 year old, the [MASK] is oldest. third, first, second Table 2: Examples of oLMpics questions, with the correct answer underlined. old.",
"A model's prediction is determined by the probabilities assigned to the [MASK] token, with younger being selected if its probability is higher than older, and older otherwise.",
"MC-MLM restricts the possible answers to single tokens.",
"Tasks with longer answers require MC-QA.",
"In this method, a new feedforward network maps the [CLS] token embedding to a single logit.",
"For prediction, answer choices are individually concatenated to the original question, forming a new sentence for each choice.",
"This set of sentences is input into the model, and the choice corresponding to the sentence with the largest logit is selected.",
"While the MC-QA method allows for longer choices, the added feedforward network must be trained; therefore, zero-shot evaluation is not possible.",
"Extending Beyond MLM The oLMpics MC-MLM method relies on the model giving probabilities of individual words in a bidirectional context.",
"However, models like GPT2 do not have access to the future context, which makes it impossible to directly predict the token in an example like A 41 year old is [MASK] than 42 year old.",
"For these models, we sum the log-probabilities of individual words to find the probability of the whole sentence.",
"We do this for every possible answer, e.g., a sequence with younger instead of [MASK] and older.",
"Then, we select the one with the highest total probability.",
"Extending BART and T5 is more straightforward because their objectives and architecture are very flexible.",
"For both of these models, we use the original oLMpics input format.",
"T5 has multiple [MASK]-tokens and we always use <extra_id_0> token in our evaluation.",
"The biggest difference is that BART produces the full sentence and we need to extract the probabilities for the masked words and T5 produces only the tokens in the place of [MASK].",
"Similar to oLMpics, the datasets used by Ettinger (2020) are framed as fill in the blank tasks.",
"Unlike oLMpics, the model always needs to predict only the last word, so both bidirectional and unidirectional models can be evaluated on these tasks directly.",
"The biggest distinction of this dataset is its source.",
"The datasets CPRAG-102 (Federmeier and Kutas, 1999), ROLE-88 (Chow et al., 2016), and NEG-136 (Fischler et al., 1983) come from the psycholinguistics and neuroscience studies and were originally evaluated on humans.",
"CPRAG-102 targets commonsense and pragmatic inference e.g. Justin put a second house on Park Place.",
"He and his sister often spent hours playing __ , Target: monopoly , other labels: chess, baseball .",
"ROLE-88 aims at evaluating event knowledge and semantic roles.",
"NEG-136 tests how well models understand the meaning of negation and consists of two subsets: simple (SIMP) and natural (NAT).",
"For example, SIMP: Salmon is a fish/dog versus Salmon is not a fish/dog.",
"NAT: Rockets and missiles are very fast/slow versus Rockets and missiles aren't very fast/slow .",
"Evaluation of this dataset is performed in two ways: affirmative statements and negative statements.",
"For affirmative ones, the model needs to complete a sentence like A robin is a with the expected answer bird .",
"For negative, A robin is not a should not be completed with a bird .",
"(Ettinger, 2020) finds that this type of error is very common in BERT , which suggests that the model cannot handle negation correctly.",
"Ettinger (2020) tests BERT models in two ways: using a pre-defined set of answers, similar to oLMpics MC-MLM, or computing top-k accuracy from the whole model vocabulary.",
"We adopt the same approach in this study.",
"We evaluate eight models families on the oLMpics (29 models in total) and six families on psycholinguistic data (17 models).",
"This extends the Talmor et al. (2019) results with six new model families and Ettinger (2020) with four.",
"Zero-shot evaluation It has been shown that language models can implicitly learn downstream tasks (Radford et al., 2019; Brown et al., 2020).",
"However, it is still not obvious what tasks are learnable in this manner without explicit supervision.",
"In our study, similar to Talmor et al. (2019), we find that none of the models can solve Multi-Hop Composition or Always-Never tasks substantially better than a majority baseline (see Table 4).",
"This holds true not only for masked language models but also for unidirectional language models such as GPT2 and text-infilling models such as T5 or BART .",
"Only small and base versions of T5v1 .",
"1 outperform the majority baseline on MultiHop Composition by a small margin.",
"Multi-shot evaluation Not surprisingly, fine-tuning models on oLMpics improves the scores across the board.",
"This is true even for the tasks on which zero-shot performance is extremely poor.",
"For example, while all models fail on Multi-hop Composition during zero-shot evaluation, most models can reach perfect or near-perfect accuracy on this task after fine-tuning.",
"However, AlwaysNever and Taxonomy Conjunction remain challenging for all models.",
"For the full multi-shot evaluation, see Table 7 in the Appendix.",
"To check how the size of a model affects the performance, we evaluated different versions of GPT2 , T5 , and ALBERT models on the oLMpics tasks ranging from 14M (smallest ALBERT ) to 2.8B (largest T5 ) parameters.",
"All of the models perform near-random on 3 out of the 6 tasks, suggesting that Multi-Hop Composition, Antonym Negation, and Always-Never are hard to learn via the (masked) language modeling objective.",
"On the rest of the tasks, we observe no clear improvement trend for GPT models based on the model size.",
"In most of the tasks, GPT large either performs on par or has higher accuracy than GPT xl while being twice as small.",
"We also compute Spearman correlation between model accuracy and model size for GPT2 , ALBERT , and T5 models.",
"5 For all GPT2 and ALBERT (v1 and v2) tests, the p-value is (cid:29) 0 .",
"05 , suggesting that there is no rank-correlation between model size and task performance.",
"However, in the case of T5 models, there is a strong (1.0) and significant correlation (p-value 10 6 ) for all tasks except Always-Never .",
"We account for multiple hypothesis testing using Bonferroni's method.",
"For Taxonomy Conjunction , the correlation is negative.",
"Contrary to the common knowledge, with rare exceptions (Section 4.1), we do not observe that parameter count, dataset size, model architecture or directionality are predictive of model performance on zero-shot oLMpics (Table 4).",
"RoBERTa large usually performs amongst the best models, while having a very similar architecture and objective to BERT large .",
"Reasonable explanations would be the dataset size, but this does not align with the BART large results.",
"Encoder-decoder architecture does seem not to be indicative of the performance either, as T5 large and BART large have vastly different results.",
"Psycholinguistic datasets (Table 5) demonstrate similar behaviour.",
"RoBERTa large is generally the stongest model followed by T5 xl .",
"We would like to note that these datasets have less than 100 examples and their statistical power (Card et al., 2020) is very small.",
"Our intuitions about the relative suitability of different model classes are based on their performance on standard benchmarks (Wang et al., 2018, 2019) and existing investigations of scaling laws (Rad-ford et al., 2019; Kaplan et al., 2020).",
"In contrast to this received wisdom, our experiments suggest that this does not in fact lead to better performance on specific linguistic skills.",
"Ettinger (2020) observed that BERT is not sensitive to negation in non-natural (SIMP) or less-natural cases.",
"In our experiments (Table 6), we find that the only model with zero accuracy outside of BERT is a distilled version of BERT itself.",
"Multiple models achieve non-zero accuracy 5 Note that sample size for each test is 4, so these results should be taken as anecdotal.",
"on NEG-SIMP (neg), but the numbers might be misleading.",
"For example, while ALBERTv1 xlarge has 27.8% accuracy on NEG-SIMP (neg), this accuracy is mainly caused by mistakes in language modeling while still being insensitive to negation (e.g., it predicts vegetable for both An ant is a and An ant is not a ).",
"Specifically, ALBERTv1 xlarge only changes its predictions in 5.5% cases.",
"However, unlike other models, RoBERTa large actually changes its predictions in 33% cases, suggesting that sensitivity to negation might be possible to learn via masked language modeling.",
"One drawback of datasets from Ettinger (2020) that we have noticed was the ambiguity of answers.",
"For 3185 example, many models predict words like this, that, it as the next word for Checkmate, Rosaline announced with glee.",
"She was getting to be really good at [MASK] instead of the word chess.",
"In fact, for T5 xl predictions, we found that 79.4% of predictions are semantically and grammatically plausible, while this model has only achieved 58.8% top-5 accuracy on the CPRAG-126 dataset (Table 5).",
"Another example would be I'm an animal like",
"Eeyore! the child exclaimed.",
"His mother wondered why he was pretending to be a [MASK] .",
"CPRAG expects the answer donkey, which assumes that the reader (or model) is familiar with the English names of Winnie-the-Pooh book characters.",
"6 4.6 Antonym Negation: Impact of prompt variation While there is clear evidence that models pretrained with the MLM objective have trouble with negation (Ettinger, 2020), no such evidence has been available for models trained autoregressively.",
"At the same time, a number of studies have shown that autoregressive models can be significantly improved with prompting.",
"Our question is whether we can make a language model (GPT-2) understand negation via an alternative wording of the task (prompt engineering).",
"We tested four different prompts for the Antonym Negation task.",
"Table 3 shows the patterns and the corresponding accuracies of GPT models.",
"All experiments use yes/no verbalizers.",
"While some prompts improve the oLMpics prompt results (up to +6%), this improvement is not consistent across models showing that even very similar models are sensitive to prompt variation in different ways.",
"Additionally, prompt #4 (Table",
"3) improves the smallest model, GPT2 base , so significantly that it outperforms the largest model by approximately 10%, demonstrating once again that parameter count is not a reliable predictor of the model performance.",
"For one oLMpics task, Age Comparison, we observe that models do not perform equally well on",
"all age ranges, similar to the findings of Talmor et al. (2019).",
"Figure 1 shows that with the exception of GPT2 base , all GPT2 variants perform well on 10-20 year olds and poorly on the 30-40 age group, with a significant drop in performance from 80% to 20%.",
"Generally, GPT2 seems to predict younger ages more accurately.",
"However, the smallest model, GPT2 base , exhibits a different trend than other models as age increases.",
"We find that model performance can change significantly on both oLMpics and psycholinguistic datasets if we add a period to the end of the sequence.",
"For example, BERT and DistilBERT achieve an accuracy of 3% without a period on CPRAG as compared to 52.9% when a",
"'. is appended.",
"We observe a similar trend on the ROLE and NEG datasets and for other models including RoBERTa , where the accuracy on CPRAG jumped from 47.1% to 70.1%.",
"For oLMpics, the change of performance is less dramatic, but still noticeable.",
"We observe that in 6% of cases (across all 3186 CPRAG-126 ROLE-88 NEG-136SIMP(Aff) NEG-136NAT(Aff) BERT base 52.9 27.3 100 43.8 BERT large 52.9 37.5 100 31.3 RoBERTa base 70.1 46.6 94.4 56.3 RoBERTa large 82.4 55.7 94.4 50 DistilBERT base 55.9 28.4 94.4 43.8 AlBERTv1 base 17.6 17.1 72.2 25.0 AlBERTv1 large 35.3 26.1 83.3 25 AlBERTv1 xlarge 41.2 34.1 55.5 18.8 AlBERTv1 xxlarge 82.4 53.4 72.2 50 AlBERTv2 base 41.4 26.1 33.3 31.1 AlBERTv2 large 47.1 29.5 83.3 37.5 AlBERTv2 xlarge 61.8 37.5 94.4 25 AlBERTv2 xxlarge 85.3 50 100 37.5 T5 small 20.6 9.1 44.4 18.8 T5 base 41.1 27.3 88.9 31.3 T5 large 50.0 36.4 94.4 43.8 T5 xl 58.8 44.3 83.3 62.5 Table 5: Zero-shot top-5 word prediction accuracy.",
"models and all tasks), model performance changes by more than 10 absolute percentage points if a full stop is added to the end of sentence.",
"Figure 2 shows the histogram of accuracy changes for oLMpics tasks.",
"In this work, we apply a large and diverse set of models to oLMpics and psycholinguistic tasks.",
"The variety of models allows us to investigate the performance of different architectures and pre-training methods on a variety of linguistic tasks.",
"Contrary to received wisdom, we find that parameter count within a given model family does not correlate with model performance on these tasks.",
"We find that none of the models, even the 2.8B-sized ones, can resolve Multi-Hop Composition and Always-Never tasks in a zero-shot manner, suggesting that the existing pre-training methods cannot learn such tasks.",
"Finally, we find that different models excel in different symbolic reasoning tasks, suggesting that slight differences related to optimization or masking strategy might be more important than the pre-training approach, dataset size, or architecture.",
"This work is funded in part by the NSF award number IIS-1844740."
] | [
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"result",
"other"
] |
[
"Recently, relation classification has gained much success by exploiting deep neural networks.",
"In this paper, we propose a new model effectively combining Segment-level Attention-based Convolutional Neural Networks (SACNNs) and Dependency-based Recurrent Neural Networks (DepRNNs).",
"While SACNNs allow the model to selectively focus on the important information segment from the raw sequence, DepRNNs help to handle the long-distance relations from the shortest dependency path of the related entities.",
"Experiments on the SemEval-2010 Task 8 dataset show that our model is comparable to the state-of-the-art without using any external lexical features.",
"Relation classification (RC) is a fundamental task in Natural Language Processing (NLP) that aims to identify semantic relations between pairs of marked entities in given sentences (instances).",
"It has attracted much research effort as it plays a vital role in many NLP applications such as Information Extraction and Question Answering (Nguyen and Grishman, 2015).",
"Traditional approaches (Kambhatla, 2004; Zhang et al., 2006) usually rely heavily on hand-crafted features and lexical resources, or elaborately designed kernels, which are time-consuming and challenging to adapt to novel domains.",
"Recently, neural network (NN) models have dominated the work on RC since they can effectively learn meaningful hidden features without human intervention.",
"However, most previous NN models only exploit one of the following structures to represent relation instances: raw word sequences (Zhou et al., 2016; Wang et al., 2016) and dependency trees (Wen, 2017; Le et al., 2018).",
"While raw sequences can provide all the information of relation instances, they also add noise to the models from redundant information.",
"While dependency tree structures help the models focus on the concise information captured by the shortest dependency path (SDP) between two entities, they lose some supplementary context in the raw sequence.",
"It is clear that the raw sequence and SDP highly complement each other.",
"We, therefore, combine them to be more effective in determining the relation without losing any information.",
"While CNNs are able to learn short patterns (local features) (LeCun et al., 1995), RNNs have been effective in learning word sequence information (long-distance features) (Chung et al., 2014).",
"In this paper, we present a new model combining both CNNs and RNNs, exploiting the information from both the raw sequence and the SDP.",
"Our contributions are summarized as follows:",
"(a) We combine Entity Tag Feature (ETF) (Qin et al., 2016) and Tree-based Position Feature (TPF) (Yang et al., 2016) to improve the semantic information between the marked entities in the raw input sentences.",
"(b) We propose Segment-Level Attention-based Convolutional Neural Networks (SACNNs) which automatically pay special attention to the important text segments from the raw sentence for RC.",
"(c) We build Dependency-based Recurrent Neural Networks (DepRNNs) on the SDP to gain long-distance features.",
"Then, we combine the SACNN and the DepRNN to preserve the full relational information.",
"Our proposed model achieves new state-of-the-art results on SemEval-2010 Task 8, compared with other complex models.",
"RC plays a significant role in many NLP applications.",
"Recent work usually present the task from a supervised perspective.",
"Traditional supervised approaches can be divided into feature-based methods and kernel methods.",
"Feature-based methods focus on extracting and combining relevant features.",
"Rink and Harabagiu (2010) leveraged useful features to achieve the best performance on SemEval-2010 Task 8.",
"Meanwhile, kernel methods measure the structural similarity between two data samples, based on carefully designed kernels.",
"Wang (2008) combined convolutional kernel and syntactic features to gain benefits for relation extraction.",
"Nowadays, deep neural networks are widely utilized in RC.",
"Zeng et al. (2014) exploited a CNN to extract lexical and sentence features.",
"Qin et al. (2016) used ETF to specify target entities in input sentences and fed them to a CNN.",
"Vu et al. (2016) combined CNN and RNN to improve performance.",
"Some recent work leveraged SDP for RC.",
"Yang et al. (2016) proposed a position encoding CNN based on dependency parse trees, while Wen (2017) presented a model that learns representations from SDP, using both CNN and RNN.",
"Given a sentence S with an annotated pair of entities ( e 1 , e 2 ), we aim to identify the semantic relation between them.",
"Since the set of target relations is pre-defined, RC can be treated as a multi-class classification problem.",
"In this section, we describe our model in detail for resolving this problem.",
"In Figure 1, Entity Tag Feature (ETF) is firstly used to annotate two entities in each raw sentence.",
"Then, each word is represented by the concatenation of two parts: Word Embedding (WE) and Tree-based Position Features (TPFs).",
"The representation sequence is then fed to the SACNN.",
"Entity Information .",
"As the pairs of entities ( e 1 , e 2 ) are previously known, it is important to provide their information to the NNs.",
"Following the work of Qin et al. (2016), we also use ETF which involves adding four tokens: (cid:104) e 1 S (cid:105) , (cid:104) e 1 E (cid:105) , (cid:104) e 2 S (cid:105) and (cid:104) e 2 E (cid:105) to each input sentence.",
"Word Embedding .",
"Distributed representations of words in a vector space have helped learning algorithms to achieve better performance in NLP tasks (Mikolov et al., 2013).",
"Following most previous work, we also use pre-trained word embeddings to initialize input word tokens in our model.",
"Tree-based Position Features .",
"Yang et al. (2016) proposed TPFs for encoding relative distances of the current word to marked entities in dependency trees.",
"The relative distance refers to the length of the SDP between the current word and the target entity.",
"Then, each integer number is represented by a randomly initialized vector.",
"Since TPFs help the neural network focus on crucial words and phrases in a sentence (Yang et al., 2016), we therefore utilize TPFs in our model.",
"In Figure 1, TPF 1 and TPF 2 are relative distance features of each word to e 1 and e 2 , respectively.",
"For the four tokens: (cid:104) e 1 S (cid:105) , (cid:104) e 1 E (cid:105) , (cid:104) e 2 S (cid:105) and (cid:104) e 2 E (cid:105) , which do not belong to the dependency tree, we simply pad zero vectors for their TPFs.",
"SDP .",
"For the input of the DepRNN, we merely use the SDP between two marked entities from the original sentence as in Figure 1.",
"Each normal word in the SDP is represented by a vector from pre-trained word embeddings.",
"Meanwhile, following Le et al. (2018), we also consider dependency relations between words in the SDP and represent each dependency relation d i as a vector D i that is the concatenation of two vectors as follows: D i = Dtyp i Ddir i , where Dtyp is the undirected dependency vector ( i.e., nmod), and Ddir is the orientation of the dependency vector ( i.e., left-to-right or vice versa).",
"Both Dtyp and Ddir are initialized randomly.",
"The architecture of our model is illustrated in Figure 1.",
"The example sentence with two entities e 1 ( play ) and e 2 ( religion ) is labeled by the directional relation Message-Topic(e1;e2) .",
"While the raw sequence is passed to the SACNN, the SDP between e 1 and e 2 is used in the DepRNN.",
"Segment Attention-based CNN .",
"In the SACNN, each raw sentence is divided into three segments according to two entities: the left segment, the middle segment, and the right segment.",
"The repetitions of e 1 and e 2 in these segments help the semantic meaning of each segment to be more clear.",
"Intuitively, the middle segment is often more important to reflect the semantic relation.",
"Qin et al. (2016) only used the middle segment with a CNN for RC, while Vu et al. (2016) proposed an extended middle context to pay special attention to the middle part.",
"Although the middle segment is more significant than two remaining segments in many cases, Figure 1: Our model for relation classification.",
"it is not always true for all.",
"For example, in the sentence All other (cid:104) e 1 S (cid:105) blood (cid:104) e 1 E (cid:105) (cid:104) e 2 S (cid:105) products (cid:104) e 2 E (cid:105) are derived from whole blood. with the relation label Entity-Origin(e2;e1) , the right segment is more important to reflect the relation type.",
"Besides, the left and right segments might also provide the necessary information to RC.",
"We therefore proceed three segments independently through three separate CNNs, which allow the model to automatically identify segments containing important information.",
"Each CNN includes one convolutional layer and one max-pooling layer.",
"Let M be a matrix consisting of output vectors of three CNNs: M = [ m 1 , m 2 , m 3 ], where m i is the output of CNN i .",
"The final representation r 1 of the raw sentence generated by SACNN is formed by a weighted sum of output vectors in M : z i = tanh ( m i ) , i = exp ( w T z i + b ) (cid:80) 3 i =1 exp ( w T z i + b ) , r 1 = 3 (cid:88) i =1 i m i , where w is a weight vector, w T is its transformation, and b is a bias parameter.",
"Dependency-based RNN .",
"While SACNN can learn local features, it cannot handle long-distance dependency between two entities.",
"This disadvantage causes difficulty in correctly assigning subject and object roles of two entities when capturing the directional relation.",
"Meanwhile, RNN could tackle the problem of long-distance pattern learning (Zhang and Wang, 2015).",
"Besides, the SDP naturally offers the relative positions of subjects and objects through the path directions (Xu et al., 2015).",
"We, therefore, exploit SDP based on RNN to gain the information in the directional relation.",
"An shown in Figure 1, we use Bidirectional Long Short-Term Memory (BLSTM) on the SDP between two entities.",
"Due to its ability to capture long term memory, the BLSTM accumulates increasingly richer information as it goes through the SDP from both two forward and backward directions (Palangi et al., 2016).",
"When it reaches the last two words, the last two hidden states are expected to provide the full semantic meaning of the whole SDP.",
"Additionally, since the length of the SDP is often not so long, we concatenate two output vectors of the last two hidden states as the final representation r 2 of the SDP by DepRNN.",
"Combination of SACNN and DepRNN .",
"Finally, we combine both SACNN and DepRNN models to exploit fully their own distinct advantages.",
"While SACNN can focus on important segments and gain local features, DepRNN helps to handle long-distance dependency between two entities based on the SDP as well as provide subject and object roles of two entities for the directional relation.",
"Therefore, the final representation r of the relation instance is concatenated by two output vectors ( r 1 , r 2 ) of SACNN and DepRNN, which is then fed to a softmax classifier.",
"We evaluate our model on the SemEval2010 Task 8 which contains 8 , 000 training sentences and 2 , 717 test sentences, with 19 relations ( 9 directed relations and an undirected Other class).",
"Therefore, the relation classification task is treated as a multi-class classification problem.",
"Following previous work, the official macro-averaged F1-score, which excludes the Other relation, is used for evaluation.",
"We randomly held out 10 % of the training set for validation.",
"The Stanford Parser is also used to convert sentences to dependency trees.",
"For word embeddings, we use the 300 dimensional embeddings of Komninos and Man-andhar (2016).",
"In this work, we do not focus on comparing the effectiveness of the different pre-SACNN F1 WE, ETF 83.9 WE, TPF 84.5 WE, ETF, TPF 85.1 Table 1: Comparison of different features in SACNN.",
"trained embedding sets.",
"The above pre-trained embedding set is selected since it embeds dependency context to provide valuable syntactic information.",
"Four tokens: (cid:104) e 1 S (cid:105) , (cid:104) e 1 E (cid:105) , (cid:104) e 2 S (cid:105) , (cid:104) e 2 E (cid:105) and out-of-vocabulary words are initialized by sampling from a uniform distribution (Kim, 2014).",
"TPF is 15 -dimensional and initialized randomly.",
"Thus, the representation of each word has a dimensionality of 330 in the raw sentence.",
"Hyper-parameters in our model are as follows: 100 filters for each window size [ 3 , 4 , 5 ] and ReLU as the activation function for each CNN in SACNN.",
"In DepRNN, the dimension of each token is 300 , the tanh activation function is applied to the last two hidden states, the dimension of each hidden state vector is 150 .",
"Other parameters include: L2 regularization with a weight of 10 4 , a mini-batch size of 64 , a dropout rate at the final layer p = 0 .",
"5 before a softmax classifier.",
"Impact of SACNN and DepRNN .",
"We consider the performance of each model by feeding separately their output vector to a softmax classifier.",
"In Table 1, we see the effect of different features to SACNN's performance.",
"Combining ETF and TPF significantly enhances the F 1 score by 0 .",
"6 %.",
"It proves that ETF and TPF complement each other to more fully provide information about the marked entities and important words to SACNN.",
"We also examine the segment-level attention mechanism of SACNN.",
"In Table 2, with the same input features (WE, ETF, TPF), the segment-level attention mechanism makes a great contribution by increasing the F 1 score by 1 %.",
"To check the effect of combining SACNN and DepRNN, in Table 3, we compare the performance of each model to our combined model.",
"First, Model F1 DepRNN 83.8 SACNN 85.1 Combined 85.8 Table 3: Evaluation of our combined model.",
"the SACNN's performance is superior to the DepRNN.",
"One possible reason is that while SACNN selectively focuses on the important segments as well as gains local features from the raw sentences, DepRNN based on the SDP, which is short in the SemEval2010 Task 8, can only provide effectively the entities role.",
"Then, by combining SACNN and DepRNN, our model can exploit the fully necessary information and achieve the best performance.",
"Comparisons with the State of the Art .",
"We compare our model to some recent work on RC in Table 4.",
"Most previous work exploited some external lexical features (WordNet, NER) and combine NNs to improve the performance (Yang et al., 2016; Wang et al., 2017).",
"Wang et al. (2017) and Wen (2017) proposed complex structures for integrating the CNN and the LSTM, and achieved an F 1 of 84 .",
"7 % and 85 .",
"1 % respectively.",
"Zhang et al. (2018) combined CNN and BLSTM, and reached an F 1 of 83 .",
"7 % using only WE, PF features.",
"Without using any external lexical resources, our model achieves an F 1 of 85 .",
"8 %, showing that combining SACNN and DepRNN is very effective, since SACNN helps to selectively focus on the important segments and gains local features, DepRNN provides the role information of subject and object of two entities in addressing the relation directionality.",
"Comparing with some recent work, our model obtains a notable performance.",
"This work presents a new model that combines the SACNN and the DepRNN for RC.",
"Combining ETF and TPF provides entity and semantic information of the input sentences to the model effectively.",
"We also propose the SACNN which automatically focus on the essential segments and gains local features.",
"Besides, the DepRNN helps to exploit long-distance dependency between two entities and their roles.",
"Finally, combining the SACNN and the DepRNN brings the best performance since they highly complement each other.",
"Our model achieved a notable performance on the SemEval2010 Task 8 without using any external lexical resources.",
"This work was partly supported by JST CREST Grant Number JPMJCR1513, Japan.",
"We are grateful to the members of the Computational Linguistics Laboratory, NAIST and the anonymous reviewers for their insightful comments."
] | [
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"objective",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.",
"Although existing approaches such as label smoothing can alleviate this issue, they fail to adapt to diverse dialog contexts.",
"In this paper, we propose an Ada ptive Label Smoothing ( AdaLabel ) approach that can adaptively estimate a target label distribution at each time step for different contexts.",
"The maximum probability in the predicted distribution is used to modify the soft target distribution produced by a novel light-weight bi-directional decoder module.",
"The resulting target distribution is aware of both previous and future contexts and is adjusted to avoid over-training the dialogue model.",
"Our model can be trained in an end-to-end manner.",
"Extensive experiments on two benchmark datasets show that our approach outperforms various competitive baselines in producing diverse responses.",
"The success of neural models has greatly advanced the research of dialog generation (Huang et al., 2020; Wang et al., 2020; Zhang et al., 2020).",
"However, most of these models suffer from a low-diversity issue where models tend to generate bland and generic responses such as I don't know or I'm OK (Li et al., 2016).",
"Although various approaches have been proposed to tackle this issue (Li et al., 2016; Zhao et al., 2017; Du et al., 2018; Zhou et al., 2018; Welleck et al., 2020; Zheng et al., 2020b), there are still remarkable gaps between responses generated by neural models and those from humans (Holtzman et al., 2020).",
"Further, some existing methods may even harm the fluency or coherence when improving the diversity of generated Equal contribution Corresponding Author: [email protected] So, what exactly do you do around here ?",
"responses.",
"(Ippolito et al., 2019; Massarelli et al., 2020; Zheng et al., 2020a).",
"Recently, Jiang and de Rijke (2018); Jiang et al. (2019) show that there is a strong connection between the low-diversity problem and the over-confidence issue.",
"i.e., over-confident dialogue models tend to produce low-diversity responses.",
"One of the reasons can be attributed to the supervision target.",
"Specifically, training a dialogue generation model with the Maximum Likelihood Estimation (MLE) objective under the hard target (i.e., one-hot distribution as ground truth) makes the model favor high-frequency tokens and produce over-confident probability estimation (Gowda and May, 2020), which ultimately leads to poor calibration (Mukhoti et al., 2020), and thus low diversity (Jiang et al., 2019).",
"Hinton et al. (2015) and Yang et al. (2018) suggest that the ideal training target should be a soft target that assigns probability mass on multiple valid candidates (see Figure 1).",
"With such a soft target, the over-confidence issue can be alleviated (M uller et al., 2019), and thus the diversity of the output responses can be improved.",
"Unfortunately, the ideal soft target is challenging to obtain.",
"Early works try to tackle this issue using label smoothing (Szegedy et al., 2016), i.e., a small probability is uniformly assigned to nontarget words.",
"However, the target distribution constructed in this way is far from ideal: First , the probability of the target word is chosen manually and fixed, which cannot adapt to different contexts.",
"However, as Holtzman et al. (2020) demonstrated, human text distribution exhibits remarkable fluctu-ations in the per-token perplexity.",
"We argue that different target probabilities should be used for different contexts.",
"Second , the uniform assignment of the probability mass on non-target words ignores the semantic relationship between the context and each word.",
"Ideally, a word should receive more probability mass if it is more relevant to the context.",
"For the example shown in Figure 1, word fun is more likely to appear behind the context I make the robots seem more than word bank .",
"To address the above issue, we propose an Ada ptive Label smoothing (AdaLabel) method that can dynamically estimate a soft target distribution at each time step for different contexts.",
"Specifically, for each target word y t in the training data, the probability distribution predicted by the current model is first obtained.",
"The maximum probability p max in this distribution measures the confidence of the current prediction, i.e., a higher p max means higher confidence for the current prediction.",
"To avoid over-confidence, we use p max as the supervision signal for the target word y t in the training process so that the model will not be optimized towards y t when it correctly predicts y t .",
"A word-level factor is also introduced to facilitate the learning of low-frequency words.",
"Moreover, we introduce a novel auxiliary decoder module D a to produce the supervision signals for these non-target words in each training step.",
"D a only contains one transformer block, and it is optimized to predict words based on bi-directional contexts.",
"A novel Target-Mask attention scheme is devised to prevent D a from seeing the target word in the training process.",
"This scheme also enables parallel training and inference of D a .",
"We perform extensive experiments on two benchmark datasets: DailyDialog and OpenSubtitles.",
"Our method outperforms various competitive baselines and significantly improves the diversity of generated responses while ensuring fluency and coherency.",
"Our major contributions are summarized: 1. We propose AdaLabel, a method that can produce a soft target distribution considering the current context and the model's confidence.",
"Specifically, AdaLabel ensures that the dialogue model will not be optimized toward the target word y t if y t has been correctly predicted.",
"This prevents our model from being over-confident.",
"2. We introduce a light-weight bi-directional decoder that can produce context-aware supervision signals for non-target words.",
"A novel Target-Mask attention scheme is devised to facilitate the parallel training and inference of this decoder.",
"3. Extensive experiments on two benchmark dialogue datasets with both automatic and human evaluation results show that our method helps to alleviate the model over-confident issue and significantly improves the model's diversity.",
"Diversity Promotion: Existing approaches for solving the low diversity issue of neural dialogue models generally involve two categories:",
"The first category is training-based, where new training objectives are designed (Li et al., 2016; Zhang et al., 2018; Gao et al., 2019) or latent variables are introduced (Zhao et al., 2017; Zhou et al., 2018) in the dialogue model.",
"Some methods also try to refine the training target used in the MLE loss (Choi et al., 2020; Jiang et al., 2019; Li et al., 2019), or directly penalize the trivial responses with auxiliary loss terms (Welleck et al., 2020; Li et al., 2020).",
"Unlike these existing approaches, our method tries to adaptively adjust the training target by utilizing the current predictions.",
"The second category is decoding-based, in which different heuristic decoding rules are designed (Holtzman et al., 2020; Kulikov et al., 2019).",
"Note that these decoding techniques are independent of the model setting, and our method can be used in combination with these techniques.",
"Confidence Calibration: Modern deep neural networks suffer from the over-confidence issue (Guo et al., 2017; Kumar and Sarawagi, 2019), and various remedies are proposed (Pereyra et al., 2017; Mukhoti et al., 2020; Lin et al., 2017).",
"Following the work of Jiang and de Rijke (2018); Jiang et al. (2019), our method is proposed to tackle the over-confidence issue to improve the diversity of the generated responses.",
"However, different from existing approaches, our method enables more flexible controls over the target distribution.",
"Knowledge Distillation: Another important technique similar to our work is knowledge distilla-Encoder Decoder Context Auxiliary Decoder (cid:3028) Training Response (cid:3400)(cid:4666)1(cid:3398)(cid:4667) (cid:3400) (cid:2869) (cid:2869) (cid:2870) (cid:2870) (cid:2871) (cid:3021) (cid:4670)(cid:4671) (cid:2871) (cid:2872) (cid:2869) (cid:2869) (cid:2870) (cid:2870) (cid:2871) (cid:4666) (cid:2871) (cid:4667) Auxiliary Distribution (cid:4666) (cid:2871) (cid:4667) Hard Target (cid:4666) (cid:2871) (cid:4667) Adaptive Soft Target (cid:4593) (cid:4666) (cid:4593) ,(cid:4667) Partial Response Predicted Distribution (cid:3040)(cid:3028)(cid:3051) (cid:4666) (cid:2871) (cid:4667) Figure 2: Overview of constructing the adaptive soft target q (cid:48) using AdaLabel: The maximum probability p max in the predicted distribution p is used to obtain an adaption factor (cid:15) , which is further used to combine the hard target q and the auxiliary distribution v to obtain q (cid:48) .",
"The most related work comparing to ours is the C-MLM approach (Chen et al., 2020), in which a BERT model is fine-tuned to be a teacher.",
"Our approach and C-MLM's primary difference is that our auxiliary decoder D a is a one layer module that is jointly trained with the dialogue model.",
"However, the BERT teacher in C-MLM contains much more parameters, and it is trained using an expensive pre-trained and then fine-tuned process.",
"Moreover, the target-masked attention scheme in D a enables parallel inferences of v for each training sequence Y .",
"In contrast, multiple independent forward passes are required for the BERT teacher.",
"The goal of generative dialogue modeling is to learn a conditional probability distribution p ( Y | X ) , where X is the dialogue context, Y = y 1 , ..., y T is a response word sequence, and y i V is a word from the vocabulary V .",
"In an auto-regressive manner, p ( Y | X ) is factorized as (cid:81) t p ( y t | y <t , X ) .",
"For each target word y t in the training sequence Y , a conventional MLE training approach try to optimize the following cross entropy loss: L ( q , p ) = (cid:88) w k V q k log [ p ( w k | y <t , X )] , (1) where q is a one-hot distribution (i.e., a hard target) that assigns a probability of 1 for the target word y t and 0 otherwise, i.e., q k = 1 only when w k = y t .",
"For simplicity of notation, we abbreviate the dependency of y t in the notation of each distribution in our paper, i.e., different target word y t in Y corresponds to different values of q and p .",
"where [0 , 1] is an adaption factor, and v is an auxiliary distribution vector that depends on the current time step.",
"(see Figure 2 for an overview).",
"In this study, we constrain v to assign zero probability for the target word y t and non-zero probabilities for these non-target words V (cid:54) = y t = { y i | y i V , y i (cid:54) = y t } .",
"This constraint allows us to explicitly control the supervisions assigned to y t .",
"Specifically, the first term q and the second term (1 ) v in Eq.",
"2 respectively determines how much probability q (cid:48) assigns to y t and V (cid:54) = y t .",
"This setting differs from conventional knowledge distillation (Kim and Rush, 2016) because it facilitates more flexible controls over q (cid:48) , so that we can use the factor to determine the supervision signal provided for the target word y t .",
"The following sections detail how to compute and v .",
"We control the probability of the target word y t in p (cid:48) by manipulating the adaption factor in Eq.",
"2. Specifically, for a training dialogue pair (cid:104) X, Y (cid:105) and each target word y t Y , the current distribution p ( | y <t , X ) is first calculated, and the maximum probability in this distribution is obtained: p max = max w k V p ( w k | y <t , X ) .",
"where serves as a lower-bound of (i.e., ).",
"The basic intuition behind Eq.",
"4 is to set = p max when p max is reasonably large.",
"This design prevents our model from receiving supervisions sharper than p max , when the current prediction is confidence enough.",
"Further, to ensure that the target word y t always receives the largest probability in q (cid:48) , i.e., to ensure > (1 ) max ( v ) (see Eq. 2), in which max ( v ) is the maximum probabilities for non-target words V (cid:54) = y t , we have to enforce > max ( v ) 1+ max ( v ) .",
"Thus we propose to calculate the lower-bound of as: = max ( v ) 1 + max ( v ) + , (5) where > 0 is a hyper-parameter that controls the margin between the probability of the target word and non-target words in p (cid:48) .",
"To facilitate faster converge and better learning of low-probability words, an empirical factor [0 , 1] is further introduced to adjust the calculation of on the basis of Eq.",
"4: = 1 (1 max ( p max , )) , (6) where is calculated as the relative ratio to p max : = (cid:20) p ( y t | y <t , X ) p max (cid:21) 2 , (7) where p ( y t | y <t , X ) is the probability for the target word y t .",
"Note that Eq.",
"6 and Eq.",
"4 is equivalent if = 1 .",
"Intuitively, accelerates the training of low-frequency words because if y t is of low-frequency in the corpus, then y t is usually under-trained and thus p ( y t | y <t , X ) is generally small.",
"This leads to a small and thus increases the probability for y t in p (cid:48) .",
"Note that , and are all time-step specific variables, whereas is a fixed hyper-parameter.",
"This allows the values adapt to dynamic contexts.",
"In our experiments, Eq.",
"6 is used to calculate .",
"The auxiliary distribution v in Eq.",
"2 is calculated using an auxiliary decoder D a , which is a single-layer transformer-based decoder that is jointly optimized with the generation model.",
"Figure 3 shows the structure of D a , in which a novel target-masked (cid:2869) (cid:2869) (cid:2870) (cid:2870) (cid:2871) [] (cid:2871) (cid:2872) , (cid:2872) Target-Masked Attention",
"attention scheme is devised to mask each target word y t in the self attention module of the decoder when calculating the corresponding v (see Figure 3b and 3c).",
"In this way, bi-directional contexts can be utilized when predicting the auxiliary distribution v for y t .",
"Moreover, it is important to use only one decoder layer in D a because stacking multiple layers in D a leaks the information of y t to v .",
"Note that using one layer in D a does not necessarily downgrade its performance (Kasai et al., 2021).",
"Our experiment results in Section 5.1 indicate that with the help of bi-directional contexts, the accuracy of D a largely outperforms the unidirectional dialogue decoder that is much deeper than D a .",
"Moreover, for a training response Y , the structure of D a enables us infer the auxiliary distribution in parallel for all the target words in Y within a single forward pass.",
"This differs from the BERT teacher used by Chen et al. (2020), in which multiple independent forward passes are needed to get the teacher distributions for all the words in Y .",
"When training D a , the following standard MLE loss is optimized for each target word y t : L ( q , v ) = |V| (cid:88) k =1 q k log v k , (8) in which the notation of q k follows Eq.",
"The outputs of D a are used as the logits to infer v to be further used in Eq.",
"2. Specifically, the logit of the target word y t is masked to before Softmax to ensure y t always receives zero probability in v .",
"Moreover, we also follow the approach used by Tang et al. (2020) to truncate the head and tail of the remaining logits before inferring v in Eq.",
"2, i.e., all the logits are ranked in a descending order and only the logits ranked from n to m are kept while the rest logits are masked to .",
"This masks the head and tail probabilities in v to zero.",
"We argue that truncating the tail probabilities of v filters noises, and truncating the head probabilities of v encourages the dialogue model to focus more on low-probability words.",
"In our experiments, we set n = 2 and m = 500 .",
"An extensive hyper-parameter search indicates that our method is not sensitive to the value of n and m .",
"There are two major differences between our auxiliary decoder D a and the teacher model used in conventional knowledge distillation approaches: First, conventional teacher models usually carry more parameters than their students, whereas D a is rather light-weight.",
"Second, conventional teacher models are typically pre-trained before being utilized in the distillation process, whereas D a is trained jointly with our dialogue model.",
"We use two benchmark datasets for open-domain dialogue generation: DailyDialog (Li et al., 2017) is a high-quality multi-turn dialogue dataset that is collected from daily conversations.",
"OpenSubtitles 1 contains dialogues collected from movie subtitles.",
"Moreover, we follow Li et al. (2016) and Jiang et al. (2019) to focus on short conversations, i.e., dialogues with posts or responses longer than 100 tokens are removed.",
"See Table 1 for more details.",
"The backbone of our model is the transformer-based sequence to sequence model (Vaswani et al., 2017), and most hyper-parameters follow Cai et al. (2020).",
"Specifically, the encoder and decoder each contains 6 layers.",
"Each layer has 8 attention heads, and the hidden size is set to 512.",
"The auxiliary decoder D a follows the same hyper-parameter setting as the dialogue decoder, but it only contains one layer.",
"The WordPiece tokenizer provided by 1 http://opus.nlpl.eu/OpenSubtitles.php BERT (Devlin et al., 2019) is used, and the Adam optimizer (Kingma and Ba, 2015) is employed to train our model from random initializations with a learning rate of 1e-4.",
"in Eq.",
"5 is set to 0.2 for all datasets.",
"See Appendix A for more details.",
"2 4.3 Baselines We compared our method with two groups of baselines that try to tackle the over-confidence issue.",
"The first group modifies the training target used to compute the loss function: 1) LS (Szegedy et al., 2016): uses the label smoothing approach to construct a target distribution by adding the one-hot target and a uniform distribution; 2) FL (Lin et al., 2017): uses the focal loss to down-weigh well-classified tokens in each time step.",
"3) FACE (Jiang et al., 2019): uses the frequency-aware cross-entropy loss to balance per-token training losses.",
"Specifically, relative low losses are assigned to high-frequency words to explicitly tackle the over-confidence issue.",
"We used the best performing Pre-weigh version in our experiments.",
"4) F 2 (Choi et al., 2020): factorizes the target distribution based on the token frequencies.",
"The second group of baselines add some penalty term to the standard MLE loss: 5) CP (Pereyra et al., 2017): a confidence penalty term is added to regularize the entropy of the model, so that over-confident predictions are penalized; 6) UL (Welleck et al., 2020): an unlikelihood loss term is added to penalize the frequently generated words.",
"7) NL (He and Glass, 2020): works similarly with baseline UL except a negative loss term is used instead of the unlikelihood loss term.",
"8) D2GPo (Li et al., 2019): augments the MLE loss with a data-dependent gaussian prior objective to assign different losses for different non-target words.",
"We also compared to: 9) CE : a vanilla Seq2Seq model trained with the cross-entropy loss.",
"For fair comparisons, the C-MLM model proposed by Chen et al. (2020) is not used as our baseline since the BERT teacher in C-MLM requires a large amount of extra data to pre-train.",
"Nevertheless, AdaLabel still surpasses C-MLM on various metrics (see Appendix F for more analysis).",
"All our baselines are adapted from the authors' official codes with the same backbone architecture and hyper-parameters as our model (see details in Appendix B).",
"Following the original setting, a train-2 Our code is available at: https://github.com/ lemon234071/AdaLabel Model DailyDialog OpenSubtitles Dist-1, 2 Ent-1, 2 LF BLEU-2,3,4 Dist-1, 2 Ent-1, 2 LF BLEU-2,3,4 CE 1.67 9.43 4.53 6.59 2.99 7.56 4.38 2.61 2.55 9.87 4.13 5.58 0.84 7.60 4.30 2.57 LS 1.48 8.78 4.48 6.55 2.44 7.98 4.68 2.86 2.77 13.08 4.45 6.57 0.51 8.91 5.57 3.84 FL 2.38 13.42 4.7 7.04 5.05 9.74 6.12 4.11 3.19 13.16 4.42 6.50 1.04 8.06 4.79 3.08 FACE 1.62 11.04 4.96 7.27 4.11 8.78 5.06 3.06 3.31 14.06 4.77 7.05 1.33 7.69 4.40 2.70 F 2 1.40 7.91 4.35 6.28 2.32 7.78 4.45 2.60 2.89 11.40 4.24 6.14 0.99 7.52 4.30 2.62 CP 2.35 12.91 4.64 6.89 4.07 9.06 5.68 3.79 3.11 12.72 4.36 6.35 0.98 8.06 4.82 3.12 UL 2.35 12.99 4.68 6.98 4.96 10.83 6.87 4.61 2.84 11.64 4.31 6.32 0.76 7.73 4.59 2.96 NL 1.66 9.18 4.47 6.58 4.30 9.83 5.83 3.60 3.24 12.98 4.42 6.49 1.08 7.56 4.38 2.71 D2GPo 1.26 8.06 4.43 6.48 2.20 8.30 4.82 2.93 2.07 11.01 4.32 6.36 0.19 8.41 5.08 3.35 AdaLabel 3.96 23.53 5.17 8.00 8.49 17.42 13.38 11.01 4.78 22.88 4.96 7.66 1.47 9.80 6.48 4.75 Human 6.59 37.74 5.67 8.91 13.7 N/A N/A N/A 8.62 43.16 5.89 9.36 4.75 N/A N/A N/A Table 2: Automatic evaluation results (%).",
"and-refine strategy is used in baseline 3, 6, and 7, i.e., these baselines are refined based on CE .",
"We follow the setting of Jiang et al. (2019) to use deterministic decoding scheme (particularly, greedy decoding) for our model and all baselines.",
"Note that our method can be adapted to other decoding schemes such as beam-search or top-K sampling.",
"See Appendix C for more detailed analysis.",
"Metrics: We first used automatic metrics to evaluate our method: 1) Distinct ( Dist ) (Li et al., 2016) calculates the proportion of unique n-grams (n=1, 2) in the generated responses, which is widely used to measure the response diversity.",
"2) Entropy ( Ent ) (Zhang et al., 2018) evaluates how evenly the empirical n-gram (n=1, 2) distribution is.",
"Higher sores mean more diverse of the response.",
"3) Low-Frequency Token Ratio ( LF ) (Li et al., 2019) further measures the model diversity by counting the ratio of low-frequency words in the generated responses.",
"We chose words with a frequency less than 100 in each corpus as low-frequency words.",
"Over-confident models tend to omit low-frequency words (i.e., get low LF scores) and yield less diversified responses.",
"4) BLEU (Papineni et al., 2002) measures n-gram (n=2, 3, 4) overlap between the generated responses and references.",
"Results: As shown in Table 2, our method AdaLabel outperforms all the baselines by large margins on all the datasets.",
"We can further observe that: 1) AdaLabel achieves the best diversity scores (Dist-1,2, Ent-1,2, and LF).",
"This indicates that our method yields better training targets that help to produce more diverse responses; 2).",
"The models that explicitly tackle the over-confidence issue (i.e., AdaLabel and FACE) generally outperform other baselines in diversity-related metrics.",
"For example, FACE obtains the second-best diversity scores (i.e., Dist, Ent, and LF) on the OpenSubtitles dataset.",
"This verifies our motivation that alleviating the over-confidence issue helps to produce more diverse responses.",
"Note that our method also outperforms all the baselines using the stochastic decoding scheme.",
"Please refer to Appendix C for more details.",
"Metrics: Pairwise manual evaluations are conducted to further validate our method.",
"Specifi-cally, for a given dialogue post, our model's response is paired with the one from a baseline.",
"Three individual annotators were employed to rank each response pair from three aspects: 1) Fluency ( Flu. ): which response is more fluent; 2) Coherency ( Coh. ): which response is more coherent to the context; 3) Informativeness ( Info. ): which response contains more informative content.",
"We also asked the annotator to choose an overall preferred response ( Pref. ).",
"Ties were allowed.",
"Results: 200 posts were randomly sampled from each of these two datasets, respectively, and totally 3.6K response pairs were generated.",
"The inter-rater annotation agreement was measured using Fleiss's kappa (Fleiss, 1971).",
"Particularly, the value on DailyDialog, OpenSubtitles dataset was 0.59 and 0.55, respectively, indicating moderate agreement.",
"As shown in Table 3, AdaLabel outperforms all the baselines on the informativeness measure.",
"This means that our method can respond with more informative content.",
"We can further observe that: 1).",
"All models achieve competitive fluency because it is easy for neural models to produce flu-ent responses by yielding trivial responses like I Comparison DailyDialog OpenSubtitles Pref. Flu. Coh. Info. Pref. Flu. Coh. Info. AdaLabel vs CE 17.00 1.33 12.5 28.33 6.33 1.17 7.33 13.67 AdaLabel vs LS 2.67 0.17 3.33 24.83 5.3 -0.67 3.17 8.50 AdaLabel vs FL 4.50 1.67 7.00 22.0 8.00 1.00 6.00 5.50 AdaLabel vs FACE 6.67 3.50 7.17 8.50 4.50 0.50 1.83 2.50 AdaLabel vs F 2 7.67 0.33 6.83 8.67 4.33 -0.50 1.67 9.50 AdaLabel vs CP 10.50 -0.17 8.00 23.83 8.00 1.50 6.17 16.83 AdaLabel vs UL 7.83 0.83 6.67 17.33 6.83 2.00 5.83 15.00 AdaLabel vs NL 9.17 2.67 9.17 7.67 5.17 0.17 2.17 15.5 AdaLabel vs D2GPo 0.83 0.00 3.33 15.17 3.17 7.33 1.00 6.33 Table 3: Pairwise human evaluation results (%). The absolute gains of AdaLabel (i.e., Win rate Lose rate ) are reported. , indicates significant improvement with p -value < 0 . 05 and < 0 . 005 , respectively (sign test). Model BLEU-3,4 Dist-1,2 Ent-1,2 LF 1.w/o 5.46 3.57 2.52 13.21 4.64 6.89 4.85 2.w/o 11.35 8.70 3.62 20.56 5.02 7.70 7.30 3.Orig. v 8.15 5.77 3.71 19.53 5.00 7.58 8.25 4.Uniform 5.66 3.61 2.24 14.96 4.84 7.33 4.98 5.Rand 6.27 4.07 2.03 13.47 4.7 7.08 4.56 6.BERT 11.6 9.34 3.67 20.97 5.02 7.71 7.28 AdaLabel 13.38 11.01 3.96 23.53 5.17 8.00 8.49 Table 4: Ablation study results on DailyDialog (%). don't know.",
"However, our model surpasses most baselines in terms of fluency while ensuring high diversity scores.",
"This demonstrates the superiority of our method in producing high quality responses.",
"2).",
"AdaLabel produces more coherent responses comparing to most baselines.",
"This verifies that our model does not sacrifice the response quality when achieving high diversity scores.",
"In fact, by controlling the model's confidence, more low-frequency words are encouraged, and thus AdaLabel can produce more relevant and coherent responses.",
"This claim is further verified by observing that our model achieves the best overall preference score among all the baselines.",
"The first group validates the effectiveness of the calculated target word probability, i.e., : 1).",
"w/o directly sets a fixed value for in Eq.",
"2. The specific value of is searched from 0.1 to 0.7 with a stride of 0.1; 2).",
"w/o omits the empirical factor in calculating , i.e., the value of in Eq.",
"2 is calculated using Eq.",
"4 in instead of Eq.",
"6.",
"The second group validates the effectiveness of the non-target word probabilities produced by D a , i.e., v : 3).",
"Orig.",
"v does not truncate the head of v when inferring from D a .",
"Note that the truncation for the tail of v is still applied since its effectiveness has already been proved in previous studies (Tang et al., 2020; Tan et al., 2019); 4).",
"Uniform uses an uniform distribution as v in Eq.",
"2. Note that different from the baseline LS , the value of is calculated using Eq.",
"6 in this ablation model, whereas the value of in the baseline LS is fixed ; 5).",
"Rand use a random distributions as v in Eq.",
"2; 6).",
"BERT follows the work of Chen et al. (2020) to fine-tune a pre-trained BERT model to produce v .",
"Note that our dialogue model may benefit from the multi-task training of D a since D a shares the same encoder with our dialogue model.",
"Optimizing Eq.",
"8 may help the encoder to capture better features.",
"For fair comparison, we kept the task of optimizing D a in ablation models 4-6 although it is not used to infer v .",
"Table 4 shows the results of ablation models on the DailyDialog dataset.",
"As can be seen from the first two rows, our method to adaptively calculate helps to improve the performance of our model by a large margin, and the empirical adjustment factor helps to further improve our performance by facilitating the learning of low-probability words.",
"The performance of ablation models 3-6 in Table 4 proves that v captures reliable distribution and helps our model produce more diverse responses.",
"Moreover, truncating the head distribution of v enables the dialogue model to focus more on the low-frequency words and thus facilitates more informative responses.",
"It is also interesting to note that our auxiliary decoder D a surpasses the BERT teacher used by Chen et al. (2020) in helping the dialogue model DailyDialog OpenSubtitles Auxiliary Decoder D a 64.03 64.92 Dialog Decoder in AdaLabel 44.16 43.90 Dialog Decoder in CE 38.58 41.57 Table 5: Prediction accuracy of decoders on test sets.",
"to produce more diverse responses.",
"This further proves the effectiveness of D a considering that BERT contains 6 times parameters than D a and consumes much more computation resources.",
"To further test the performance of D a , we evaluated the averaged accuracy score of D a when predicting each target word in the test set (first row in Table 5).",
"Specifically, a target word y t in the reference response is determined to be correctly predicted if it is top-ranked in the predicted distribution p ( | y <t , X ) .",
"A better decoder is generally believed to obtain a higher accuracy.",
"Table 5 also reports the uni-directional dialogue decoders' accuracy in AdaLabel and CE.",
"It can be seen that D a can make substantially more accurate predictions with the help of modeling bi-directional contexts using only one layer.",
"Moreover, the dialogue model's decoder in AdaLabel, which is guided by D a , achieves better accuracies than the CE.",
"This further proves that our light-weight D a is capable of producing effective v .",
"We also visualized the distribution of confidence scores assigned by each dialogue model to high-frequency words.",
"Figure 4 shows the results of [1, 200] [201, 400] [401, 600] [601, 800] [801, 1000] Token Frequency 0.0 0.5 1.0 1.5 2.0 2.5 % o f G e n e r a t e d T o k e n s AdaLabel FACE NL FL F2 CP CE UL LS D2GPo Figure 5: Ratios of low-frequency tokens in the generated responses on the OpenSubtitles dataset.",
"four best performing models on the OpenSubtitles dataset.",
"The spikes of high confidence score observed in Figure 4b and 4d indicate that CE and FACE assign extremely high confidence scores to a large number of high-frequency words.",
"Although the smoothed labels in LS manage to alleviate these high-confidence-spikes (Figure 4c), a considerable amount of words still receives high confidence scores in LS.",
"Our model outperforms all the baselines to avoid assigning over-confidence scores, thus alleviating the over-confidence issue.",
"A similar trend is also observed on the DailyDialog dataset (see Appendix D for results of all models on both datasets).",
"Over-confident models produce less diversified responses because they usually under-estimate rare words.",
"To evaluate the effectiveness of AdaLabel, we tested whether AdaLabel encourages more rare words in its generations.",
"Specifically, the ratio of generated tokens corresponding to different token frequency bins is calculated, and the results on the OpenSubtitles dataset are shown in Figure 5.",
"It can be seen that AdaLabel produces more rare words in the generated responses than other baselines.",
"Similar results are also observed on the DailyDialog dataset (see Appendix E).",
"We address the low-diversity issue of neural dialogue models by introducing an adaptive label smoothing approach, AdaLabel.",
"In our method, the probability of each target word is estimated based on the current dialogue model's prediction, and the probabilities for these non-target words are calculated using a novel auxiliary decoder D a .",
"A target-masked attention scheme is introduced in D a to help capture forward and backward contexts.",
"We evaluate our method on two benchmark datasets: DailyDialog and OpenSubtitles.",
"Extensive experiments show that our method effectively alleviates the over-confidence issue and improves the diversity of the generated responses.",
"As future work, we believe this method is extensible to other text generation tasks.",
"This work was partly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).",
"This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.",
"We thank Jinchao Zhang and Yao Qiu for early discussions and insightful comments of this work."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"other",
"other",
"other"
] |
[
"In this paper, we explore the ability to model and infer personality types of opponents, predict their responses, and use this information to adapt a dialog agent's high-level strategy in negotiation tasks.",
"Inspired by the idea of incorporating a theory of mind (ToM) into machines, we introduce a probabilistic formulation to encapsulate the opponent's personality type during both learning and inference.",
"We test our approach on the CRAIGSLISTBARGAIN dataset (He et al., 2018) and show that our method using ToM inference achieves a 20% higher dialog agreement rate compared to baselines on a mixed population of opponents.",
"We also find that our model displays diverse negotiation behavior with different types of opponents.",
"1 1 Introduction Developing dialog systems for negotiation is challenging since the task requires a combination of good communication skills and strategic reasoning capabilities (Traum et al., 2008; Young et al., 2013; Keizer et al., 2017).",
"While recent neural models (Wen et al., 2017; Dhingra et al., 2017; Zhou et al., 2019; He et al., 2018) have shown that useful dialogue strategies can be learned from offline corpora, they do not explicitly model the mental state of other agents, which can make it challenging to generate tailored strategies and utterances for different types of opponents.",
"In this paper, we introduce a new framework for generating strategic dialog inspired by the idea of Theory of Mind (ToM) from cognitive science (Premack and Woodruff, 1978; Bruner, 1981; Wimmer and Perner, 1983).",
"When negotiating with others, humans innately infer the intention of the Authors contributed equally.",
"other party, and guess how their own utterances would affect the opponent's mental state.",
"To emulate this capability in machines, we train a first-order ToM model to predict an opponent's response given the current state and the agent's own possible utterances.",
"This first-order ToM model can then be incorporated into dialog agents to enable one-step lookaheads during inference.",
"In order to predict future responses, we model the opponent's personality type as a intermediate variable ( z ), which can be predicted using the dialogue history.",
"We use this predicted personality, along with the previous state and utterance to calculate the likelihood of the opponent's next state for all possible actions that our agent can take in the current state.",
"This allows us to compute an expected value of return for each action, which is subsequently used to produce a policy for our agent.",
"We propose two variants of our ToM-based dialog agent an explicit version that outputs the opponent type as an intermediate prediction, and an implicit version that models the opponent type as a latent variable.",
"Both models can be instantiated as end-to-end neural networks and can be trained using reinforcement learning.",
"Our approach differs from existing opponent modeling work (Lee et al., 2018; Hadjinikolis et al., 2013; Oren and Norman, 2009; Rienstra et al., 2013; He and Boyd-Graber, 2016) in three aspects: 1) it provides strategic benefit during inference which leads to more successful negotiations, 2) it can flexibly adjust the degree of dependence on ToM predictions by changing a temperature parameter, and 3) it utilizes text utterances to infer types of opponents, thereby capturing side information ( e.g. , emotion) that is useful yet absent from standard dialog state transitions.",
"We perform experiments on a modified version of the CRAIGSLISTBARGAIN negotiation task (He et al., 2018), where the agent is matched with different opponents from diverse populations (e.g., cooperative, competitive, and aggressive negotia-tors), without being provided information about their identity.",
"Empirically, our method outperforms several baselines on the task by completing more deals and achieving higher utility.",
"For instance, our model achieves about 20% higher dialog agreement rate and utility than a baseline dialog manager trained with reinforcement learning.",
"Our analysis reveals that the agent demonstrates diverse negotiation behavior and adapts well to different types of opponents.",
"Speaker-follower models and rational speech acts.",
"Our work is related to recent papers using the Rational Speech Acts (RSA) model for natural language (Goodman and Stuhlmuller, 2013; Monroe and Potts, 2015; Goodman and Frank, 2016; Shen et al., 2019).",
"RSA has also been applied to language grounding (Andreas and Klein, 2016) and vision-language navigation (Fried et al., 2018).",
"Our first-order theory of mind modeling is different since we learn how the speaker's intent and utterance affect the opponent's reaction, instead of assuming the optimality of the listener in the speaker's mind.",
"Recent RSA model (White et al., 2020) considers speakers and listeners in resource-constrained settings, while we do not enforce constraints on opponents.",
"Our approach with explicit characteristic modeling is also similar to the ToMnet (Rabinowitz et al., 2018), which uses a multi-agent reinforcement learning setting to learn identity embeddings of populations from past trajectories, and predict the mental state of an agent using the current trajectory.",
"However, our first-order ToM models for negotiation also take utterances into account, which makes improving upon a base RL policy non-trivial.",
"Theory of Mind in dialog systems.",
"Theory of mind for modeling user personality types and predicting responses has been studied in the context of building user simulators (Georgila et al., 2006; Rieser and Lemon, 2006) for training RL-based dialog systems, and to make dialog systems explainable (Chandrasekaran et al., 2017).",
"Recent work on dialog policy learning has employed theory of mind with a focus on specific domains.",
"The Recursive Mental Model (RMM) (Roman et al., 2020) was proposed for navigation settings, where questions and answers are generated between a navigating agent and a guiding agent.",
"Another approach Answerer in Questioner's Mind (AQM) (Lee et al., 2018) tackled an answer guessing game with information-theoretic methods.",
"In these domains, the opponents are assumed to be cooperative, while our method is applicable for interacting with both cooperative and competitive opponents.",
"Recently, Jang et al. (2020) employed Bayesian-optimal Monte-Carlo planning for end-to-end dialog generation at the utterance level.",
"However, their method only models the latent goal of the opponent instead of potential responses like we do.",
"Opponent modeling in RL.",
"Apart from dialog systems, opponent modeling has been explored in other multi-agent reinforcement learning settings (Wen et al., 2019; von der Osten et al., 2017; He and Boyd-Graber, 2016; Hadjinikolis et al., 2013; Rienstra et al., 2013).",
"Our approach differs from these works by: 1) providing strategic benefit during real-time inference, 2) adjusting the degree of dependence on the ToM predictions through a temperature parameter, and 3) utilizing text utterances in the dialog to infer types of opponents, thereby capturing side information that is useful yet absent from standard state transitions.",
"Task.",
"We consider a task-oriented dialog setting where there are two agents, a buyer and a seller.",
"The buyer's goal is to purchase the listed item with minimum cost, and the seller's goal is to sell the item at a price as high as possible.",
"The item description is public for both agents, while the target prices are private for both buyer and seller.",
"Two agents negotiate in alternating turns until they conclude with an agreement or disagreement.",
"MDP Formulation.",
"We formulate the negotiation process between two agents as a multi-agent Markov Decision Process (MAMDP), (cid:104)N , S , A , P , R , , n (cid:105) .",
"N = { 1 , 1 } is the set indicating two agents ( buyer=-1 / seller = 1 ).",
"A is the action space consisting of dialog acts .",
"For example, a valid dialog act a it A can encode the intent ( inform , propose , counter , etc.) and price that the agent i tries to express in the t -th round.",
"Two agents act alternatively, i.e., if at the round t only the agent i moves, then at the round t + 1 only the agent i moves.",
"S is the state space consisting of the negotiation status.",
"We define s 0 S as the initial status of the dialog, which contains the information about items HERO4 Black Camera Standard Housing 131', Rechargeable Battery, Flat Adhesive Mount, 3-Way Pivot Arm GoPro Hero4 Black + Battery BacPac Price: $265 Buyer: Yes, I am interested.",
"to be negotiated (e.g., initial price, description).",
"We also define s t = ( s 0 , a i 1 , a i 2 , . . . , a it 1 , a i t ) .",
"In this way, the only randomness of the environment comes from the opponents policy ( s t 1 a i t ), i.e., s t 1 s t is stochastic, while ( s t 1 , a i t ) s t is deterministic.",
"Note that the state s t is only partially observable in reality, since one can only infer the true intent from the corresponding utterance.",
"We provide a summary of all the symbols used in Table",
"1. 3.1 Negotiation Systems As illustrated in Figure 1, our negotiation system encapsulates three important modules following traditional goal-oriented dialog systems (Young et al., 2013): A parser that converts the opponent's utterance u i t 1 to dialog act a i t 1 (e.g., Are you interested in this GoPro con-firm(price=None) ).",
"Since the dialog acts in our system do not intend to capture the complete semantics of a sentence, a simple rule-based parser is effective; A manager that decides the responding dialog act a it according to the current dialog state s t 1 = ( s 0 , . . . , a i t 1 ) .",
"Our ToM model is applied to this component of the system; A generator that produces natural language response u it based on the current dialog act a it and the dialog state s t 1 , or equivalently s t (e.g., the previous dialog state + pro-pose(price= $ 230) How does $230 for the GoPro sound? ).",
"It can be either deterministic to reduce computational cost or probabilistic to encourage diversity in language.",
"Following (He et al., 2018), the parser and the generator modules are obtained by rule-based method or supervised learning in advance, and fixed when training the dialog manager using supervised learning (SL) or fine turning using reinforcement learning (RL).",
"The SL dialog manager employs a neural network to model state transitions P ( s t | s t 1 ) (or equivalently, ( a it | s t 1 ) ) of the training corpus by minimizing the cross entropy loss.",
"The RL dialog manager further fine tunes the SL model by maximizing a composed reward function with reinforcement learning.",
"The learned dialog policy ( a it | s t 1 ) can be further improved by enforcing some hand-craft rules.",
"There are two main problems with the SL or RL manager.",
"First, the policy learned by an RL-based dialog manager produces reactive responses (Tamar et al., 2016) , which are usually inadequate in a long term planning problem requiring more Symbol Definition N = { 1 , 1 } Identities of the two players ( buyer = -1 / seller = 1 ) s 0 S Initial state of the dialog (e.g., list price, description).",
"strategic thinking, such as negotiation.",
"Second, it does not take the effect of the agent's generated utterances on opponents' reactions into account.",
"To address these problems, we propose an approach to incorporate the theory of mind (ToM) (Premack and Woodruff, 1978) into the inference process.",
"This enables one-step looking ahead to consider the effect of the agent's utterances and generate more thoughtful strategies.",
"The goal of the first-order theory of mind is to predict how a dialog act and an utterance generated by us would affect the reaction of the opponent.",
"As illustrated in Figure 1, suppose that our current dialog state is s t 1 , which consists of the history of past dialog acts and the initial information, as well as the current utterance u i t 1 from the opponent.",
"The ToM model simulates the situations where we take dialog act a it ( e.g. , propose(price= $ 230) ) and utter the sentence u it ( how does $ 230 for it sound ), and estimates the probability distribution of the opponents response a it +1 .",
"By combining actions and states by definition, our first-order ToM model estimates the transition probability T ( s t +1 | u i t 1 , s t , u it ) .",
"mild words when countering) and strategies ( e.g., tend to insist on their target price or agree to a com-promise).",
"The first-order ToM can either implicitly capture these personalities by learning the transition T ( s t +1 | u i t 1 , s t , u it ) , or explicitly infer the type of the opponent's personalities z i first, from the past interaction and the opponent's utterance, i.e., learning an identifier z i t 1 = f ( s t 1 , u i t 1 ) , and then learns the transition based on that information, i.e., T ( s t +1 | z i t 1 , s t , u it ) , to make accurate prediction about opponents reaction.",
"We introduce a policy with an explicit first-order ToM model T ( s t +1 | z i , s t , u it ) , where the opponent's personality z i can be estimated from partial dialog.",
"During training, the ground truth of the type of opponents personalities, z , is given.",
"Therefore we can train an identifier z i t 1 = f ( s t 1 , u i t 1 ) with extra supervision to predict the opponents type every round.",
"During the inference process, the probability of taking action a it , i.e., a policy ToM ( a it | s t 1 , z i t 1 ) , is proportional to exp 1 (cid:88) u i t G ( u it | s t ,z i t 1 ) (cid:124) (cid:123)(cid:122) (cid:125) Generator (cid:88) s t +1 T ( s t +1 | z i t 1 ,s t ,u it ) (cid:124) (cid:123)(cid:122) (cid:125) 1 st -order ToM V ( s t +1 ) (cid:124) (cid:123)(cid:122) (cid:125) Value Fn.",
", where the exponent can be interpreted as the expected best return over opponent's next moves, after taking action a it at state s t 1 (compressed as s t ).",
"In the above expression, T ( s t +1 | z i t 1 , s t , u it ) is the explicit first-order ToM model , which can be trained by supervised learning from the corpus; G ( u i t | s t , z i t 1 ) is the generator which renders utterance conditioned on the current state and the personality of the opponent; V ( s t +1 ) , is the value function estimated by the RL-based dialog manager, which gives the best future return estimation supposing the current state is s t +1 .",
"It approximates V ( s t +1 , z i t 1 ) when it is nearly optimal.",
"is the temperature parameter .",
"Since ToM is normalized as a Boltzmann distribution, when temperature , ToM is a uniform distribution over the next states; when 0 , ToM is nearly deterministic assigning most probability mass to the s t with the largest expected value after one-step ToM looking ahead.",
"We also introduce first-order ToM policy with implicit personality modeling, where we do not have a module explicitly which explicitly predicts the opponent identity z .",
"Instead, we combine the identifier and ToM model in the explicit version, to directly learn T ( s t +1 | u i t 1 , s t , u it ) without extra supervision.",
"In this case, ToM ( a it | s t 1 , u i t 1 ) is proportional to exp 1 (cid:88) u it G ( u it | s t ) (cid:124) (cid:123)(cid:122) (cid:125) Generator (cid:88) s t +1 T ( s t +1 | u i t 1 , s t , u it ) (cid:124) (cid:123)(cid:122) (cid:125) 1 st -order ToM V ( s t +1 ) (cid:124) (cid:123)(cid:122) (cid:125) Value Fn.",
"where T ( s t +1 | u i t 1 , s t , u it ) is called the implicit first-order ToM model , and the rest of components are similar to the explicit version.",
"We call ToM a first-order ToM policy, because it utilizes the first-order transition of the opponent, and estimates the expected outcome of performing a certain action which leads to state s t .",
"The personalities of the opponent are implicitly inferred from the previous utterance u i t 1 and the history s t .",
"In practice, the summation (expectation) is approximated by Monte Carlo sampling.",
"Implicit vs Explicit model.",
"We expect both explicit and implicit ToM models to provide several unique benefits.",
"First, co-training the identifier f ( s t 1 , u i t 1 ) and the explicit first-order ToM model T ( s t +1 | z i , s t , u it ) is expected to have better sample efficiency than the implicit ToM model T ( s t +1 | u i t 1 , s t , u it ) since it utilizes the prior knowledge that personality identity affects state transition, and is trained with more supervision.",
"Besides, with the personality z i , the generator and the value functions can also adapt to different populations of opponents.",
"However, the annotations for opponent types are not available for all corpora, therefore the implicit model would be a more general approach.",
"After learning the above two ToM models from the corpus, we leverage the pre-trained RL policy as a prior with the 1st-order ToM policy to perform the inference .",
"The final policy is given by ( a it | s t 1 , z i t 1 ) rl ( a it | s t 1 ) ToM ( a it | s t 1 , z i t 1 ) , where rl is a policy obtained in a previous RL training process (see Section 5).",
"From a Bayesian point of view, rl can be seen as a prior P ( a it | s t 1 ) , and the ToM is analog to the likelihood P ( best return | a it , s t 1 ) by its definition (not strictly true since it has to be summed up to one) which modifies the probability assignment in rl , i.e., the posterior P ( a it | best return , s t 1 ) .",
"This gives the probability that the current agent should move to s t in order to reach the highest return in the end.",
"ToM modifies the probability assignment in rl , when in ToM , it is equivalent to the original RL policy rl .",
"We compare three hybrid dialog managers combining neural networks and rules to control the flow of dialog:",
"(1) The SL+rule manager employs a LSTM-based network to learn the transitions from s t 1 to s t from corpus.",
"Rules ensure that only deals meeting 70% target are acceptable.",
"(2) The RL manager uses an actor-critic method (Mnih et al., 2016), which contains a policy network with the same neural network architecture as the SL manager, and a value network predicts the future returns given states.",
"(3) The ToM manager uses the first-order ToM policy as described in Section",
"4. to learn the best response policy ToM ( a it | s t 1 , u i t 1 ) which is aware of the opponent's personalities and mental state.",
"An extra LSTM model is used to encode u 1 t 1 in both explicit and implicit ToM models, and learn the personality z i t 1 = LSTM ( u 1 t 1 , s t 1 ) in explicit ToM models which encodes a distribution.",
"Note that for all three managers, we applied reasonable hand-crafted rules to prevent unreasonable policies.",
"Specifically, the agent will never offer a price below its bottom line and will reject the opponent's offer if it is worse than its bottom line.",
"which is a linear combination of the cross entropy loss between the predicted intent and the ground truth intent, and the mean squared error between the predicted price and the ground truth price.",
"The Dialog Act Definition Example greet say hello or chat randomly.",
"reinforcement learning ( RL ) manager is then fined tuned from the SL manager to maximize a reward function described in Section 6, with the actor-critic methods (Mnih et al., 2016).",
"The actor network is initialized as the SL manager's LSTM-based network, and the critic network is partially initialized with the same network, followed by a MLP to predict the value.",
"For the ToM manager, we reuse V ( s t +1 ) from a well trained RL manager's critic network, and fix it during inference.",
"The implicit first-order ToM model T ( s t +1 | u i t 1 , s t , u it ) is directly trained via supervised learning to minimize the same loss LSL .",
"For the explicit first-order ToM model , T ( s t +1 | z i t 1 , s t , u it ) , we first train a LSTM-based identifier z i t 1 = f ( s t 1 , u i t 1 ) , which receives ground truth opponent personality z i from the corpus during training.",
"T ( s t +1 | z i t 1 , s t , u it ) is learned with the input from the well-trained identifier.",
"To obtain the 1st-order ToM policy for the inference, we approximate the sum (expectation) in ToM by Monte Carlo sampling with the generator, and discretize the price in a normalized price range.",
"In practice, we found quantizing the price range with 100 units is a good balance between time com-sumption and the quality of approximation.",
"We test our ToM negotiation framework on the CRAIGSLISTBARGAIN (He et al., 2018), which contains 6682 human-human dialogs between a buyer and a seller alternately bargaining for the price of an item on Craigslist.",
"Ontology.",
"We redesign the ontology of the CRAIGSLISTBARGAIN dataset to support a more diverse dialog act than the original coarse dialog acts (He et al., 2018), which can reflect more ways of mental state change in a negotiation.",
"We used the Microsoft Language Understanding Intelligent Service (LUIS) to relabel the dataset , and merged some similar label types, such as insist and vague-price into counter-noprice , and intro and great into greet .",
"All fifteen dialog acts after our mod-ifications are in Table",
"2. There are four intents propose , counter , agree , disagree that must be followed by a price slot, and four terminal acts accept , reject , and quit.",
"When an agent takes an oer action, the other agent has to respond with accept or reject .",
"Note that the function of this dialog act is not to capture the full semantic meaning of one utterance, but to serve as a logical skeleton for the dialog.",
"Reward function design.",
"We set the reward r i for the agent i to be a linear function of the final price, such that the buyer achieves maximal reward of 1 at its target price, the seller achieves maximal reward of 1 at the listing price, and both agents receive zero rewards at the midpoint of the listing price and the target price.",
"When there is no deal, both agents receives equivalent penalty.",
"Diverse opponent populations.",
"All our negotiation experiments are conducted against variations of the SL+rule manager as the opponent.",
"For the variations, we create 7 different opponent populations (id=0 6) by injecting different rules for changing prices and rendering utterances.",
"Price changing rules are functions of the number of sentences in the conversation history, which model the agreeability and the flexibility of a person.",
"When rendering utterances, we use a template-based language generator as in (He et al., 2018), and insert population-specific tokens in utterances by sampling according to different opponent types.",
"The cooperative population (id=5) will gradually compromise and move its price from the midpoint.",
"The utterances of this population also contain more polite and mild words indicating its negotiable position.",
"The most aggressive population (id=0) will insist its price until the end, and utters more stubborn words.",
"The competitive population (id=6) compromises from target price slower than the cooperative .",
"The other populations will follow price changing curves in between these two extremes, and also have different language properties.",
"The population types are accessible during training as ground truth values of z i to provide supervision (see Appendix A for details).",
"Models.",
"The dialog managers we compare are described in Section 5.",
"For the utterance parser , we use Microsoft Language Understanding Intelligent Service (LUIS) (Williams et al., 2015) with 10 annotated training examples for each dialog act.",
"For the Generator , we use a retrieval-based model similar to He et al., 2018 which samples an utterance from the top 10 matched templates.",
"where P deal is the final deal price, the and total price range P = P itarget P i target , where P itarget , and P i target are the extreme target prices of the two agents.",
"Note that this is different from the subjective utility of each agent based on only its own price range, which may result in utilities > 1 or < 0 more often.",
"Deal fairness ( Fa ), which is only for completed deals, as Fa i = 1 2 | Ut i 0 .",
"5 | .",
"Improvement of dialog policy.",
"We evaluate SL+rule , RL , and our ToM model on a mixed population for 4352 dialogs, which contains about 630 dialogs for each population.",
"As shown in Table 3, our explicit ToM model consistently achieves the highest agreement rate ( Ag ), with 56%, 4%, and 20% improvements compared to vanilla RL against cooperative, competitive, and mixed populations, respectively.",
"Though deal agreement is hard for competitive opponents, our explicit ToM model achieves more than 30% improvement on the deal utility when interacting with this population.",
"On the mixed population, the reward ( Re ) for SL+rule agent is low, as it is not directly optimized for better reward.",
"RL agent improves the Re a lot compared with the SL+rule baseline.",
"However, both ToM agents achieve better reward even when compared with RL agent, which shows the advantage of strategic planning.",
"Besides, unlike the SL+rule only pursues high utility when there is a deal, but ends with every low Ag , our ToM models best balance both the agreement rate and agent utility of each dialog, and outperforms SL+rule and RL for all populations.",
"Implicit vs. explicit models.",
"We found that the implicit ToM model can also achieve better Ag and Ut than the baselines for all populations.",
"But the overall performance is slightly worse than the explicit ToM model.",
"This can be explained by the fact that the explicit model has more information about the population type during training.",
"One may worry about the potential error cascade issue the explicit ToM models, as we see in Figure 2, Method Cooperative Opponents (id=5) Competitive Opponents (id=6) Mixed Population (id=0 6) Ag Ut Fa Len Ag Ut Fa Len Ag Ut Fa Len Re SL+rule 0 .",
"the top 1 accuracy of the identifier in the explicit model is only 69%, though it is significantly above the chance.",
"Our experiment show that even with an imperfect identifier, the explicit model can still outperform an implicit model, which is directly optimized for better performance.",
"Population-aware strategies.",
"As Table 3 shows, the ToM model can provide more deal fairness ( Fa , normalized price difference to the midpoint) to competitive opponents, since they rarely compromise, meanwhile reaching higher Ag and Ut .",
"When opponents are cooperative and easy to negotiate with, our ToM model can achieve much better agent utility by taking advantage of losing some dialog fairness.",
"This implies our ToM model is able to utilize different characteristics of the opponents in the strategy generation.",
"We provide some sample dialogs from the explicit ToM model in Table",
"4. When the seller is competitive, the buyer can adaptively raise its price and exchange for additional benefits, e.g., ok. i can do $46 if you split the shipping in half , to make the deal happen.",
"We note that sometimes the offer prices slightly deviate from the agreed prince in negotiation but the ToM agent still accepts.",
"This may be because the deflects of SL-based opponents is predictable to the ToM agent.",
"Effectiveness of the opponent identifier.",
"Figure 2 shows the identifier can capture the opponent identities well during interaction.",
"The accuracy of the identifier increases as the dialog progresses.",
"The top 1 accuracy after 6 opponent's turns is above 69%, and the top 3 accuracy is above 84%, where the chance is only 14.2%.",
"The average top 1 accuracy is 43.8% for all turns in 5000 dialogs of different lengths.",
"We also find the explicit ToM models can better prevent overfitting than implicit models.",
"More details are in appendix B. Visualization of population embeddings.",
"normalized latent variables in both explicit and implicit ToM models.",
"The latent variables are extracted from one layer before the output of the identifier or its equivalence in the implicit model.",
"The explicit ToM model learns embeddings encoding different opponent populations, as the major variances of variable are captured by the difference of opponent populations.",
"However, without extra supervision, the extraction of the population identity is difficult in the implicit ToM model.",
"Further analysis shows that the variances of the latent variables in the implicit ToM model are mainly explained by intent types.",
"We include more detailed analysis and t-SNE visualization in appendix B. 8 Conclusion In this work, we proposed a novel framework to integrate the concept of Theory of Mind (ToM) into generating task-oriented dialogs.",
"Our approach provides the ability to model and infer personality types of opponents, predict changes in their mental state, and use this information to adapt the agent's high-level strategy in negotiation tasks.",
"We introduced a probabilistic formulation for first-order ToM and introduce two ways to incorporate it into a dialog agent, by 1) explicitly and 2) implicitly modeling the personality of the opponent.",
"We tested our approach on a modified version of the CRAIGSLISTBARGAIN dataset (He et al., 2018) with diverse opponents.",
"Our experiments show that our method using ToM inference achieves about 20% higher dialog agreement rate and utility compared to baselines on a mixed population of opponents.",
"When negotiating with the cooperative opponents, the improvement of agreement rate is 54% .",
"Some directions for future work include developing efficient schemes to approximate the value computation for future states, exploring higher orders of ToM, as well as a tighter integration of ToM into utterance generation and processing.",
"Our dataset is modified from the open-sourced CRAIGSLISTBARGAIN dataset (He et al., 2018), which consists of negotiation dialogs between sell-ers and buyers on items from the Craigslist website.",
"The initial dataset was collected using crowd workers on Amazon Mechanical Turk (AMT) playing the role of buyers and sellers.",
"We redesigned the ontology to support more diverse dialog acts than the original coarse dialog acts.",
"We manually labeled 10 examples for each intent, and used the Microsoft Language Understanding Intelligent Service to relabel the whole dataset.",
"We create seven different populations by injecting different rules about changing prices and rendering utterances.",
"Our paper involves an NLP application that can negotiate with people to reach agreement on deals.",
"It is still at an early exploration stage so we do not expect it will currently cause any negative social impact such as massive job loss.",
"If a mature version of such a system is deployed in the future, it may lead to less fair deals between the AI system and humans, as the system is optimized to find the best strategy that maximizes its own utility.",
"But overall, we believe it will encourage market efficiency.",
"We thank Robert Hawkins, Jens Tuyls, Vishvak Mu-rahari, Howard Chen and members of the Princeton NLP group for helpful discussions and feedback.",
"This research was supported by an Amazon Research Award."
] | [
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"other",
"method",
"result",
"result",
"objective",
"other",
"method",
"other",
"objective",
"method",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Machine learning solutions are often criticized for the lack of explanation of their successes and failures.",
"Understanding which instances are misclassified and why is essential to improve the learning process.",
"This work helps to fill this gap by proposing a methodology to characterize, quantify and measure the impact of hard instances in the task of polarity classification of movie reviews.",
"We characterize such instances into two categories: neutrality , where the text does not convey a clear polarity, and discrepancy , where the polarity of the text is the opposite of its true rating.",
"We quantify the number of hard instances in polarity classification of movie reviews and provide empirical evidence about the need to pay attention to such problematic instances, as they are much harder to classify, for both machine and human classifiers.",
"To the best of our knowledge, this is the first systematic analysis of the impact of hard instances in polarity detection from well-formed textual reviews.",
"Document-level polarity classification is the task of classifying the polarity of a whole opinionated message (Pozzi et al., 2016).",
"For instance, given a movie review, the system determines whether the review text expresses an overall positive, negative, or neutral opinion about the movie.",
"Although polarity classification naturally suits to analyze consumer opinions about products and services (Gui et al., 2017), it is also well suited to various types of applications, such as to infer votes in elections (Goldberg et al., 2007), civilian sentiment during terrorism scenarios (Cheong and Lee, 2011), citizens' perception of government agencies (Arunachalam and Sarkar, 2013) and recommendation systems (Zhang, 2015).",
"Supervised machine learning is one of the most common and successful approaches for polarity classification, but even state-of-the-art methods fail to correctly classify a substantial portion of the instances, from 10% to 20% , depending on the dataset (Ribeiro et al., 2016).",
"The problem with this approach is that if the data is not representative and reliable, the model is unlikely to perform well.",
"One source of unreliability is data noise, which can be categorized into class noise and attribute noise (Gupta and Gupta, 2019).",
"Class noise occurs when the training data contains instances that are wrongly labeled.",
"Attribute noise occurs when the training data contains one or more attributes with wrong, incomplete or missing values.",
"In the case of textual data, such noise usually comes in the form of errors in language rules, such as typos, grammatical errors, improper punctuation, and abbreviations (Agarwal et al., 2007; Michel and Neubig, 2018; Lourentzou et al., 2019).",
"Nevertheless, for both cases, the noise can be eliminated from the data by correcting the labels (for class noise) or the problematic text (for attribute noise).",
"A more problematic source of data unreliability in polarity classification tasks comes from well written text that, for some reason, does not convey its class clearly.",
"Literature calls such instances hard instances , which are those that are intrinsically hard to correctly label or classify (Smith and Martinez, 2011; Beigman Klebanov and Beigman, 2014).",
"Differently from noisy instances, hard instances cannot be corrected, so the only solution is to identify and remove them from the training data.",
"Also, hard instances are not equivalent to outliers, as they do not differ significantly from other observations and may represent a significant portion of the data (Smith et al., 2014).",
"For example, in a polarity classification task, a positive movie review that describes at least as many negative as positive points of the film can be a hard instance .",
"To the best of our knowledge, no study exists that characterizes such instances and quantifies their impact on document-level polarity classification tasks.",
"quantify and measure the impact of hard instances in polarity classification tasks and demonstrate its usefulness in the task of movie review polarity classification.",
"To this end, we collected 415 , 867 positive and negative movie reviews from Metacritic.",
"One advantage of Metacritic is that the meaning of ratings is clearly stated to the users when a review is being submitted: positive ratings range between 61% and 100% , neutral range between 40% and 60% , and negative between 0% and 39% .",
"Because of that, class noise and biases should be rare, that is, a user who liked (disliked) a movie will very unlikely give a negative (positive) rating to it.",
"Thus, classification errors will mostly be due to hard instances , which we assign into two disjoint categories: neutral and discrepant .",
"A neutral review does not have a clear polarity and a discrepant review has a human-perceived polarity that is different from its associated rating.",
"This categorization is complete, i.e., every instance that, for a human, does not reveal its class clearly falls into one (and only one) of these two types of hard instances .",
"Neutral and discrepant reviews are characterized by a well-defined human classifier that uses human reasoning to infer the class of the example.",
"When the class assigned by the human classifier is incorrect, we label the review as discrepant , i.e., the human-perceived polarity of the text is different from its associated rating.",
"When the human classifier is not confident about its prediction, we label the review as neutral .",
"We labeled 1 , 200 reviews and found 198 neutral and 64 discrepant reviews.",
"We tested state-of-the-art machine classifiers on these reviews and results revealed that hard instances can significantly decrease their performances.",
"In short, the main contributions are: A simple and reproducible methodology based on a well-defined human classifier to characterize and identify hard instances on polarity classification tasks (Section 3); A thorough analysis of the impact of hard instances in the task of movie review polarity classification (Section 5.2); Publicly available datasets of movie reviews describing the expected amounts of five classes of hard instances (Section 5.1).",
"As an additional contribution, we show how far are state-of-the-art machine classifiers from human performance in the task of movie review polarity classification.",
"In supervised machine learning, class and attribute noise can increase learning complexity and, consequently, reduce classification accuracy (Zhu and Wu, 2004).",
"Class noise is considered to be more harmful than attribute noise (Frenay and Verleysen, 2014), but it is easier to detect (Van Hulse et al., 2007).",
"Thus, class noise is more often addressed in the literature (Gupta and Gupta, 2019), where several studies analyzed its impact in classification tasks and how to address it (Natarajan et al., 2013; Hendrycks et al., 2018; Liu et al., 2017; Rehbein and Ruppenhofer, 2017; Jindal et al., 2019).",
"In NLP, attribute noise are unintended errors in text, which can come from failures in automatic character recognition processes (Vinciarelli, 2005) or naturally while writing the text in the form of errors in language rules, such as typos, grammatical errors, improper punctuation, irrational capitalization and abbreviations (Agarwal et al., 2007; Contractor et al., 2010; Dey and Haque, 2009; Florian et al., 2010; Michel and Neubig, 2018).",
"In short, noise are unintentional and undesirable errors in the text that can (and should) be eliminated from the data.",
"Conversely, hard instances are noise-free and cannot be corrected, only eliminated from the data (Smith and Martinez, 2011).",
"In addition, they differ from outliers because their feature representation vectors may be similar to others from regular instances (Smith and Martinez, 2011).",
"Nevertheless, hard instances are more prone to class noise.",
"In fact, Beigman Klebanov and Beigman (2009) defined hard instances in the context of label annotations, under the assumption that items that are easy are reliably annotated, whereas items that are hard display confusion and disagreement among the annotators.",
"Later, Beigman Klebanov and Beigman (2014) showed that the presence of hard instances in the training data misleads the machine learner on easy, clear-cut cases.",
"The definition of Smith et al. (2014) is similar to ours: hard instances are simply those that should be misclassified\" by machine learning methods. The authors introduced hardness measures based on the outputs of an ensemble of classifiers to identify such instances and showed that classifiers are often uncertain about their classes. Following the same idea, Krymolowski (2002) argues that easy instances are correctly classified by all or most classifiers. On the other hand, hard instances are missed by most of them. In this work, we propose a human classifier composed by human annotators to identify hard instances . Our definition unifies the ones of Beigman Klebanov and Beigman (2009, 2014) and Smith et al. (2014). Similarly to Beigman Klebanov and Beigman (2009, 2014), we define hard instances as those in which the human classifier is uncertain or wrong about their true labels. However, different from these studies, which quantify the impact hard instances have on training, our goal is to provide a methodology to quantify the expected amount of hard instances in data and the impact they have on classifiers in production and testing. Also, and similarly to Smith et al. (2014), hard instances are divided into instances that should be misclassified, which we call discrepant , and border points, which we call neutral .",
"To the best of our knowledge, we are the first to propose a methodology to characterize and quantify the impact of hard instances in unstructured textual data for polarity classification tasks.",
"Regarding the effect of hard instances in sentiment and polarity classification tasks, Bermingham and Smeaton (2010) showed that it is easier to classify sentiment in short documents (e.g. tweets) than in longer ones, as short documents have less non-relevant information.",
"Also, Valdivia et al. (2019) showed that ratings in TripAdvisor reviews are not strongly correlated with sentiment scores given by sentiment analysis methods and proposed a unified index that aggregates both polarities.",
"Barnes et al. (2019) collected a subset of sentences that an ensemble of state-of-the-art sentiment classifiers misclassified and annotated them for 18 linguistic and paralinguistic phenomena, such as negation, sarcasm, among others.",
"In our work, we analyze manually identified hard instances (as opposed to instances misclassified by a machine classifier).",
"As a result, compared to these works, we have a more precise (e.g., a misclassified instance is not necessarily hard) and complete (e.g., not all hard instances are misclassified) ground-truth.",
"Problem Setting.",
"In this work, we focus on the problem of polarity detection of movie reviews, but all the methods can be applied to any document-level polarity classification task.",
"More formally, in a dataset D = ( X, Y ) composed by a set of textual movie reviews X and their corresponding binary ratings Y , each review x i X is associated with a score (or rating) y i Y that can be either 0 ( positive ) or 1 (negative).",
"For the aims of this paper, it is important that D does not contain any movie reviews that have been explicitly associated with a neutral score by their author, e.g. a score of 50 on Metacritic .",
"By doing this, we isolate hard instances from explicit neutral reviews, avoiding class noise and biases.",
"Our methodology is composed by a human classifier f H , which identifies hard instances , and a machine classifier f M , which is tested on hard and regular instances.",
"A classifier is defined as a function f ( x i ) that receives a textual movie review x i as input and returns its polarity y i { 0 , 1 } .",
"We use the human classifier to assign a label l i to a large sample of movie reviews x i to indicate whether x i is a hard instance or not.",
"This label can be one (and only one) of a set L of manually defined labels that indicate that the instance is regular or a type of hard instance .",
"With that, we will be able to quantify the impact of hard instances on machine classifiers and provide explanations about why they occur and how to avoid them in order to improve machine classifiers' accuracy.",
"More specifically, for a machine classifier f M and for all labels l L , regular included, we will calculate the probabilities P ( l i = l | y i (cid:54) = y i ) and P ( y i = y i | l i = l ) .",
"Types of Hard Instances.",
"A strong premise of this work is that the dataset D has no (or negligible) class noise, i.e., all polarity scores y i Y reflect the real opinion of the reviewer.",
"To guarantee that, one needs to construct D using movie reviews from systems like Metacritic or Rotten Tomatoes , which have well defined meanings for the scores, which are always visible to the reviewers.",
"Thus, every time the polarity of text x i is inconsistent with its true score y i , we assume that x i is a hard instance .",
"More specifically, we define two possible hypotheses explaining the hardness of the text x i , i.e., two disjoint types of hard instances : (1) the text does not have a clear polarity, namely neutrality , and (2) the text has a clear polarity, but its score y i is the opposite one, namely discrepancy .",
"A movie review x i is a hard instance of type neutrality when its polarity is not clear.",
"We define three labels for neutral hard instances : mixed (text has mixed opinions), factual (text is purely factual) and contextual (polarity needs context).",
"The mixed label considers reviews that describes both positive and negative points about the movie without having the overall opinion clearly stated.",
"One real example is: as dumb as the film is, the actors escape relatively unscathed.",
"The factual label defines non-opinionated reviews that describes only facts about the movie, such as: it is a movie about the World War II and its consequences on the lives of those who survived.",
"The label contextual characterizes reviews where context is needed to understand its polarity, including those containing irony and sarcasm.",
"One real example is: ultimately, Collin's film is one of forgiveness and that's not the usual way great tragedies end.",
"Finally, the label hard_undefined is given to reviews where the reasons for the lack of polarity are not clear.",
"The second type of hard instance , namely discrepancy , is given to reviews where the polarity of its text x i is the opposite of the polarity of its score y i .",
"For this type, we define a single label: discrepant (polarity of text and score are discrepant).",
"As an example, consider a highly acclaimed movie of a prestigious director, such as Martin Scorsese.",
"Now, consider a reviewer who liked this movie, but unlike the vast majority of critics, found many points that prevent her from giving it a perfect score.",
"Thus, the text will mostly be about its negative points to justify why she is not giving the expected perfect score.",
"Consequently, the text review will appear negative although the score is positive.",
"The following textual review has a clear negative polarity although its score y i is positive: Thoroughly predictable from start to finish.",
"For more examples, see Tables 4 and 5 in the Appendix.",
"Human Classifier.",
"A fundamental building block of our methodology is the human classifier f H .",
"Human classifiers are often considered to be the upper bound in terms of performance of classification tasks (Stallkamp et al., 2012; Ciresan et al., 2012; Geirhos et al., 2018), which means that when it makes a prediction error, machine classifiers will most likely also miss.",
"Moreover, when a human classifier working on its full capacity makes a mistake, and the class label is correct (i.e. no class noise), then what caused the error is most likely a hard instance (Beigman Klebanov and Beigman, 2014).",
"We use this premise to define the two types of hard instances discussed in the previous section.",
"In the task of polarity classification of movie reviews, a human classifier mistake can be due to two causes: (C1) the text of the review x i is not clear about its polarity y i , or (C2) the score y i is different from the (clearly) perceived polarity of x i .",
"In other words, the human classifier f H can be characterized by two binary features when executing this task: whether it is confident about its prediction (F1) and whether it correctly classified the polarity of the review x i (F2) .",
"Thus, when it makes a mistake, if it was not confident, an error of type C1 occurs, and when it was confident, an error of type C2 occurs.",
"The first one (C1) is associated with a hard instance of type neutrality , whereas the second one (C2) is associated with a hard instance of type discrepancy .",
"Also, while the second only occurs when the human classifier f H makes a mistake, the first occurs every time f H is not confident, i.e., it is independent of the prediction y i .",
"With the aforementioned rationale, we are ready to propose a well-defined human classifier f H to identify hard instances in movie reviews.",
"First, and in order to construct a robust classifier, f H is an ensemble composed by three independent human classifiers f h 1 , f h 2 and f h 3 .",
"In other words, we will use three annotators to label a movie review x i in terms of its polarity and hardness 1 .",
"Each annotator j { 1 , 2 , 3 } is asked to classify the reviews in two levels.",
"First, they are asked to make a prediction y ji , i.e., to classify the polarity of the review x i as positive or negative .",
"Second, they are asked to indicate whether they are confident or not about their classification y ji .",
"We denote the confidence of annotator j on review x i by c ji { 0 , 1 } , where c ji = 1 if j is confident and c ji = 0 otherwise.",
"If c ji = 0 , then we assume that x i does not contain sufficient information for j to infer its polarity, that is, x i is a hard instance of type neutrality .",
"So, annotator j is asked to choose one label l ji that fits best to the neutrality of x i , which can be either mixed , factual or contextual .",
"On the other hand, if c ji = 1 , then l ji is set to regular .",
"This process is illustrated in Figure",
"1. Of course, each annotator j is independent and cannot see the others' responses.",
"At the end of this process, for each instance x i , we will have three annotation triples ( y ji , c ji , l ji ) , where y ji { 0 , 1 } ( positive or negative ), c ji { 0 , 1 } ( not confident or confident ) and l ji LN = { mixed , factual , contextual , regular } .",
"Assuming that all annotators are equally skilled, we aggregate these annotations using majority voting to set the outputs of our human classifier f H .",
"For the polarity y i and the confidence c i , the aggregation is straightforward, as described in Equations 1 and 2: 1 In practice, any number of annotators can be used, including just one.",
"Setting the final hard instance label l i of review x i is more involved.",
"Let L i = [ l 1 i , l 2 i , l 3 i ] be the list of labels l ji given by the annotators to review x i (e.g. L 1 = [ mixed , mixed , regular ] ) and N ( l, L i ) the number of elements of L i that are equal to label l (e.g. N ( mixed , L 1 ) = 2 ).",
"Then, l i is the majority vote if at least two annotators (the majority) gave that label to x i and, if not, l i is set to hard_undefined , indicating no consensus.",
"This process is formally described by Equation 3: l i = (cid:40) arg max l L NN ( l, L i ) if N ( l, L i ) 2 hard_undefined , otherwise.",
"Finally, when the human classifier is confident about its classification of x i ( c i = 1 ), but it makes a mistake ( y i (cid:54) = y i ), we update the label l i of x i to discrepant .",
"It is easy to see that this update step will be executed only if l i was previously set to regular , i.e., it will not overwrite a neutrality label.",
"Equation 4 defines the discrepancy update step: l i = discrepant if y i (cid:54) = y i and c i = 1 .",
"Data Set.",
"We collected movie reviews from Metacritic, 2 which can be authored by regular users and experts , i.e., people working in the movie industry or important communication channels (e.g. The New York Times ).",
"In case of experts , the review provided by Metacritic is actually a short summary of the original review and, as we show in Section 5, 2 https://www.metacritic.com/movie this can be a problem for polarity classifiers.",
"Also, each experts review is associated with a score ranging from 0 to 100 , where scores from 0 to 39 are negative , from 40 to 60 are neutral , and from 61 to 100 are positive .",
"Differently, regular users reviews are produced by any person that has an account and are associated with a score ranging from 0 to 10 , where scores between 0 and 3 are negative , between 4 and 6 are neutral , and over 7 are positive .",
"As previously mentioned, the meaning of each rating is clearly conveyed to users in the Metacritic website.",
"Thus, class noise and biases should be rare in the dataset.",
"In total, we collected 415 , 867 reviews for 8 , 170 different movies, where 227 , 348 of those are from regular users and 188 , 519 from experts .",
"Our data collection was executed using the following steps.",
"First, we collected the most popular experts from the website, as provided by Metacritic.",
"Then, we generated a list of all movies reviewed by the top 10 experts.",
"From this list, which contains 8 , 170 movies, we collected all reviews from experts and regular users that were posted until August, 2018 .",
"For the purpose of this work, we avoided reviews that do not have a clear polarity ( neutral reviews), i.e., we only considered positive and negative reviews.",
"Hence, we selected a clean and unambiguous dataset.",
"Reviews from experts are usually shorter than from regular users , containing an average of 26 words (std. dev. of 13 ) against an average of 100 words (std. dev. of 129 ) for reviews by regular users .",
"In addition, we observed that experts use a more elaborate language.",
"Because of these differences, we will condition our analyses on the type of user ( experts or regular users ) and score polarity ( positive or negative ).",
"Machine Classifiers.",
"To evaluate the impact of hard instances on machine classifiers, we selected three state-of-the-art models with reported success in the task of polarity detection of movie reviews: BERT (Devlin et al., 2019), CNN-GRU (Wang et al., 2016) and C-LSTM (Zhou et al., 2015).",
"C-LSTM utilizes a Convolutional Neural Network (CNN) to extract a sequence of higher-level phrase representations, which are then fed into a Long Short-Term Memory (LSTM) unit to obtain the sentence representation.",
"CNN-GRU connects a character-aware CNN with a character-aware Gated Recurrent Unit (GRU) to learn long sequence semantics.",
"These two networks are initialized with pre-trained Word2vec vectors from Google News Dataset and have their final representations connected to a dense layer.",
"BERT uses a masked language model (MLM) to pre-train deep bidirectional representations from unlabeled text that considers both the left and right context of sentences and words.",
"In this work, we used an architecture composed by BERT embeddings pre-trained with data from Wikipedia connected with a dense layer.",
"For all architectures, the output y i is given by a sigmoid function.",
"For implementation and code details, please see the Appendix.",
"The first question we need to answer is: how many hard instances exist in movie reviews?",
"In the context of our Metacritic dataset D , the answer to this question can be influenced by two factors: (1) the type of user and (2) the polarity of their rating.",
"Thus, the following results are conditioned on whether the authors are experts or regular users and whether the reviews are positive or negative .",
"Because of that, we sampled a collection DH of 800 movie reviews from D that is both balanced in terms of user type and score polarity, i.e., this collection has 200 reviews for each of the four combinations of user type and score polarity.",
"In order to quantify the number of hard instances in DH , we use our proposed human classifier f H described in Section 3 to label every review x i DH .",
"Recall that f H assigns a polarity y i { positive , negative } to x i and, more important to our purpose here, a label l i , which can be either regular (instance is not a hard instance ), discrepant (the polarity of the text is different from the score polarity), or one of the four neutrality labels: mixed (text has mixed opinions), factual (text is purely factual), contextual (polarity needs context) and hard_undefined (reasons are unclear).",
"Also, let u i { expert , regular user } be the user type of the author of review x i .",
"Our goal with the following results is to estimate the probability P ( l i = l | y i = y, u i = u ) for the four combinations of score polarity y and user type u .",
"In Table 1, we show the number and proportion of movie reviews that are or are not hard instances for experts .",
"From the 400 labeled reviews, almost one quarter ( 92 ) are hard instances .",
"From those, note that neutral reviews are more common than discrepant ones, but while the first is equally present in both positive and negative reviews, dis-label ( l i ) positive negative total experts regular 146(36 . 5%) 162(40 . 5%) 77 % discrepant 20(5%) 3(0 . 8%) 5 .",
"crepant instances are significantly more present in positive reviews.",
"In such cases, the author gave a positive score to the movie, but its review demonstrates the opposite sentiment.",
"This often occurs when the expert is using the review to justify a good, but far from perfect score, to a critically acclaimed movie.",
"As for the neutral reviews, the most predominant type is contextual ( 6 . 8% ), followed by mixed ( 4 . 3% ) and factual ( 4 . 3% ).",
"Also, contextual instances are more common in negative reviews, when experts often use figures of speech (e.g. irony) together with external knowledge to create humour.",
"Finally, factual instances are more present in positive reviews, where the experts simply describe some characteristic of the movie that impressed them without explicitly saying that.",
"Also, in Table 1 we show the number and proportion of movie reviews that are or are not hard instances for regular users .",
"First, note that the number of reviews that are hard instances significantly decreased in comparison with the ones written by experts .",
"From the 400 labeled reviews, only 36(9%) are hard instances , of which 31 are neutral and only 5 are discrepant .",
"Different from what was verified for experts , the most predominant label for regular users was mixed , which occurred significantly more in positive reviews.",
"For the other labels, their occurrences were fairly balanced between negative and positive reviews.",
"We observed that regular users use a much more direct and simple language to state their opinions than experts .",
"Because of that, most of the hard instances are concentrated in cases where the author lists both the negative and positive aspects of the movie without stating their final opinions about the movie, which is the definition of mixed .",
"A note about the human classifier.",
"Because we used three human annotators in f H and a majority vote function, only two annotators were used initially.",
"The third annotator was called to classify x i if, and only if, the first two had any kind of disagreement, i.e., a disagreement regarding the polarity y i , the confidence c i , or label l i .",
"For the first two annotators, they agreed on 91 .",
"13% of the polarity scores, on 90 .",
"5% of their confidence levels and on 88% of their labels.",
"Regarding the third annotator, only 1 .",
"5% of the instances were not in total agreement with at least one of the other annotators.",
"The Cohen's kappa coefficient for the first two annotators was 0 .",
"82 in relation to polarity scores, 0 .",
"58 regarding their confidence levels and 0 .",
"49 regarding their attribute noise labels.",
"In this section, we quantify the impact of hard instances in machine classifiers.",
"Also, by putting these results in perspective with what was achieved by the human classifier, we hope to provide an accurate assessment on how distant machine classifiers are with respect to human performance.",
"We guide our analyses by the following questions:",
"1. What are the probabilities of a correct and a misclassification given the label l ?",
"In other words, we want to estimate the probabilities P ( y i = y i | l i = l ) and P ( y i (cid:54) = y i | l i = l ) for all labels l L .",
"2. What are the probabilities of label l given that the classifier was correct and that it made a mistake?",
"In other words, we want to estimate the probabilities P ( l i = l | y i (cid:54) = y i ) and P ( l i = l | y i = y i ) for all labels l L .",
"To address these questions, we test the three classifiers described in Section 4 in the labeled dataset DH (see Section 5.1), which contains 800 reviews.",
"Because this dataset is completely balanced, we created two balanced training datasets, one containing solely reviews from experts , namely D expertsT , and another containing solely reviews from regular users , namely D usersT .",
"Each dataset contains 8 , 796 reviews, 4 , 398 of each polarity.",
"Again, this dataset is solely used to train the machine classifiers.",
"Because these classifiers are sensitive to initialization parameters, we trained and tested them 5 times and the corresponding error bars are shown in Figure",
"2. Finally, recall that y i refers to the author's original polarity score (gold polarity) and y i refers to the polarity predicted by the classifiers, including the human classifier.",
"Figure 2 shows the classification error (with their respective error bars) for all classifiers in DH .",
"The classification error is simply the proportion of instances that were misclassified.",
"Each bar is also colored according to the labels' proportion in the misclassified instances.",
"For each classifier, the left (right) bar shows the error with respect to positive ( negative ) instances.",
"In general, the human classifier was the one that achieved the smallest error, followed by BERT and C-LSTM.",
"Also, the errors are always higher for experts , as these reviews have significantly less words (see Section 4) and more hard instances (see Section 5.1).",
"The latter is also one of the main reasons for the error being almost always higher for positive instances than for negative instances.",
"For expert reviews, while negative instances always have more regular instances, positive instances have almost twice more hard instances , particularly discrepant ones.",
"For regular user reviews, positive instances also have more hard instances , but the difference in terms of neutral reviews is more significant.",
"Note that, for both user types, this difference in the instances misclassified by the human classifier is striking.",
"For a more precise assessment of the impact of hard instances , we show in Table 2 the accuracy of the classifiers considering instances of each label separately.",
"In other words, these results provide estimates for the probabilities of our first question, P ( y i = y i | l i = l ) and P ( y i (cid:54) = y i | l i = l ) .",
"First, note that for all classifiers the accuracy significantly degrades in neutral instances and get even worse in discrepant instances.",
"Recall that a discrepant review is a review where the human classifier was sure about its polarity, but the originally assigned polarity is the opposite.",
"Thus, by definition, the human classifier accuracy on discrepant reviews is zero.",
"For neutral instances, the human classifier always outperforms the machine classifiers.",
"However, the machine classifiers are not always tricked by discrepant reviews as the human classifier is, although their performances are not better than a coin toss.",
"Considering the specific neutral labels, note that BERT achieves human level performance for contextual , which is coherent with the nature of this classifier, given that its embeddings are supposed to carry much more contextual information in comparison with the embeddings used in C-LSTM and CNN-GRU.",
"The most inconclusive results refer to hard_undefined , which is also the label with the least instances, 12 out of 800 .",
"To answer our second question, related to the probabilities P ( l i = l | y i (cid:54) = y i ) and P ( l i = l | y i = y i ) , we sample an additional dataset D errorH to be labeled by our human classifier f H .",
"First, we run the BERT classifier, which was the one that achieved the best results, on two new balanced sets of reviews extracted from D , one containing 2 , 752 reviews from experts and the other 2 , 752 reviews from regular users .",
"Again, we used the same BERT classifiers that were trained for generating the results in Figure 2, one for each user type.",
"After running BERT, we construct D errorH by sampling 100 misclassified and 100 correctly classified instances authored by each user type, for a total of 400 reviews.",
"Then, we run f H on D errorH to have a more accurate estimate of P ( l i = l | y i (cid:54) = y i ) and P ( l i = l | y i = y i ) .",
"Table 3 shows the percentages of each label for correctly and incorrectly classified instances, which provide estimates for the probabilities of P ( l i = l | y i (cid:54) = y i ) and P ( l i = l | y i = y i ) .",
"For both experts and regular users , it is much more likely to find neutral and discrepant reviews in misclassified instances.",
"In other words, one easy way to find hard instances in movie reviews is to run BERT and sample from misclassified instances.",
"Our estimates for the probabilities of finding a misclassified hard instance is 0 .",
"64 for experts and 0 .",
"56 for regular users .",
"In other words, more than 50% of our sampled misclassified instances are hard instances .",
"Recall from Table 1 that we found only 23% of hard instances in reviews from experts and only 9% in reviews from regular users in our first balanced sample DH .",
"The most striking difference is for discrepant reviews, where the number of instances increased by one order of magnitude in misclassified instances.",
"Regarding the neutral labels, our results reveal that we are at least twice as likely to find contextual instances in misclassified expert reviews and mixed instances in misclassified regular users reviews.",
"Therefore, to find hard instances with high probability, we propose to train and run BERT in the data (without filtering anything) and, from the misclassified instances, run the human classifier to identify them.",
"We investigated misclassified regular instances and found two patterns that explain the errors.",
"First, reviews that have positive and negative points, but where humans can easily identify what side has the most weight.",
"Second, reviews that have some irony that is clear to humans, but is created using words with the opposite polarity of the final score y i .",
"For examples, see Table 6 in the Appendix.",
"We conjecture that these instances can be correctly classified with extra training and more modern (and complex) architectures.",
"On the other hand, we feel that dealing with hard instances is not that simple, where more guided and focused approaches are probably needed, such as the one proposed by Valdivia et al. (2019).",
"They proposed an approach to combine reviews with scores for an aggregated polarity, which can be a good idea to deal with hard instances .",
"Overview of our results.",
"Our first goal was to quantify the expected amount of hard instances in misclassifications, which is 56% for regular users and 64% for experts .",
"Note that even though the reviews for these users are intrinsically different, the values are similar.",
"The second goal was to quantitatively show how different the two types of hard instances are.",
"Table 1 shows that neutral instances are common, and Table 3 shows they might have a significant presence even in correctly classified instances.",
"Contrastingly, discrepant instances are rare, particularly among correctly classified instances.",
"Given that our ultimate goal was to quantify and explain the reasons behind misclas-sifications, from Table 3 we can say that most of the mistakes ( 60% ) occur because of neutral ( 38% ) and discrepant ( 22% ) instances.",
"In this work, we propose a methodology to characterize, quantify and measure the impact of hard instances in the task of polarity classification of movie reviews.",
"We characterized such instances into two disjoint categories: neutrality and discrepancy .",
"We provided empirical evidence about the need to pay attention to such instances, as they are much harder to be classified, for both machine and human classifiers.",
"The main hypothesis of this work is that hard instances can make polarity classifiers fail.",
"To demonstrate this hypothesis, we provided two well defined types of hard instances , which are based on human reasoning, and a methodology to find them in labeled data.",
"With that, one can quantify how many instances of those types there are in their data, which can shed light on why and when classifiers fail.",
"We collected a noise-free (no class noise ) and well separated (no neutral polarity) dataset and showed that even in such a dataset most of the mistakes made by a state of the art classifier, namely BERT, are in our defined hard instances .",
"Observe in Table 3 that more than 50% of our sampled misclassified instances are hard instances ( discrepant or neutral ).",
"Our methodology works for every type of supervised classification task.",
"Because our proposed labels are defined from the perspective of a classifier fully capable of human-reasoning, they are easy to interpret and can be generalized to every classification task (e.g. polarity, image, song genre, topic) that humans are able to do.",
"After employing our methodology, it will be possible to differentiate mistakes that come from hard instances , which are those even humans cannot classify with confidence (or at all), and mistakes that could be solved by improving the classifier architecture.",
"In short, our proposed methodology can help quantify and explain why classifiers are making mistakes.",
"We made the dataset containing the labels publicly available 3 so it can be used as a standard benchmark for robustness to hard instances in polarity classification tasks, and to potentially foster research on models, datasets and evaluation metrics tailored for this problem.",
"This work is supported by the authors' individual grants from FAPEMIG, CAPES and CNPq.",
"We also thank all the reviewers for their thoughtful comments which helped to improve this work."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"method",
"other",
"other"
] |
[
"Relational triple extraction is a crucial task for knowledge graph construction.",
"Existing methods mainly focused on explicit relational triples that are directly expressed, but usually suffer from ignoring implicit triples that lack explicit expressions.",
"This will lead to serious incompleteness of the constructed knowledge graphs.",
"Fortunately, other triples in the sentence provide supplementary information for discovering entity pairs that may have implicit relations.",
"Also, the relation types between the implicitly connected entity pairs can be identified with relational reasoning patterns in the real world.",
"In this paper, we propose a unified framework to jointly extract explicit and implicit relational triples.",
"To explore entity pairs that may be implicitly connected by relations, we propose a binary pointer network to extract overlapping relational triples relevant to each word sequentially and retain the information of previously extracted triples in an external memory.",
"To infer the relation types of implicit relational triples, we propose to introduce real-world relational reasoning patterns in our model and capture these patterns with a relation network.",
"We conduct experiments on several benchmark datasets, and the results prove the validity of our method.",
"Relational triple extraction is defined as automatically recognizing semantic relations with triple structures ( subject , relation , object ) among multiple entities in a sentence.",
"It is a critical task for constructing Knowledge Graphs (KGs) from unlabeled corpus (Dong et al., 2014).",
"Early work of relational triple extraction applied pipeline methods (Zelenko et al., 2003; Chan and Roth, 2011), which ran entity recognition and relation classification separately.",
"However, such pipeline approaches suffered from error propagation.",
"To address this issue, recent work proposed to jointly extract entity and relations from the text Work for Live in Locate in Mark Spencer, a designer of Digium, a company in Huntsville Person Organization Location Explict Triple Implicit Triple Figure 1: An example of explicit and implicit relational triples.",
"with feature-based methods (Yu and Lam, 2010; Li and Ji, 2014; Ren et al., 2017).",
"Afterward, neural network-based models were proposed to eliminate hand-crafted features (Gupta et al., 2016; Zheng et al., 2017).",
"More recently, several methods were proposed to extract overlapping triples, such as tagging-based (Dai et al., 2019; Wei et al., 2020), graph-based (Wang et al., 2018; Fu et al., 2019), copy-based (Zeng et al., 2018, 2019, 2020) and token pair linking models (Wang et al., 2020).",
"Existing models achieved considerable success on extracting explicit triples which have direct relational expressions in the sentence.",
"However, there are many implicit relational triples that are not explicitly expressed.",
"For example, in Figure 1, the explicit triples are strongly indicated by the key relational phrases, but the implicit relation Live in is not expressed explicitly.",
"Unfortunately, existing methods usually ignored implicit triples (Zhu et al., 2019), which will cause serious incompleteness of the constructed KGs and performance degradation of downstream tasks (Angeli and Manning, 2013; Jia et al., 2020; Jun et al., 2020).",
"Our work is motivated by several observations.",
"First, other relational triples within a sentence provide supplementary information for discovering entity pairs that may have implicit relational connections.",
"For example, in Figure 1, the explicit triples establish a relational connection between Mark Spencer and Huntsville through the intermediate entity Digium .",
"Second, the relation types of implicit relation triples can be derived through real-world reasoning patterns.",
"For example, in Figure 1, the reasoning pattern one lives where the company he works for is located helps identify the type of the implicit triple as Live in .",
"In this paper, we propose a unified framework for the joint extraction of explicit and implicit relational triples.",
"We propose a Binary Pointer Network (BPtrNet), which is based on the pointer network (Vinyals et al., 2015), to extract overlapping relational triples relevant to each word sequentially.",
"To discover implicitly connected entity pairs, we preserve the information of previously extracted triples in an external memory and use it to enhance the extraction of later time steps.",
"To infer the relation types between the implicitly connected entity pairs, we propose to augment our model with real-world relational reasoning patterns and capture the relational inference logic with a Relation Network (RN) (Santoro et al., 2017).",
"The RN obtains a pattern-enhanced representation from the memory for each word pair.",
"Then the Reasoning pattern enhanced BPtrNet (R-BPtrNet) uses the word pair representation to compute a binary score for each candidate triple.",
"Finally, triples with positive scores are output as the extraction result.",
"The main contributions of this paper are: We propose a unified framework to jointly extract explicit and implicit relational triples.",
"To discover entity pairs that are implicitly connected by relations, we propose a BPtrNet model to extract overlapping relational triples sequentially and utilize an internal memory to retain the extracted triples.",
"To enhance the relation type inference of implicitly connected entity pairs, we propose to introduce relational reasoning patterns, captured with a RN, to augment our model.",
"We conduct experiments on several benchmark datasets and the experimental results demonstrate the validity of our method.",
"Early work of relational triple extraction addressed this task in a pipelined manner (Zelenko et al., 2003; Zhou et al., 2005; Chan and Roth, 2011; Gormley et al., 2015).",
"They first ran named entity recognition to identify all entities and then classi-fied relations between all entity pairs.",
"However, these pipelined methods usually suffered from error propagation problem and failed to capture the interactions between entities and relations.",
"To overcome these drawbacks, recent research focused on jointly extracting entities and relations, including feature-based models (Yu and Lam, 2010; Li and Ji, 2014; Ren et al., 2017) and neural network-based models (Gupta et al., 2016; Miwa and Bansal, 2016; Zheng et al., 2017).",
"For example, Ren et al. (2017) proposed to jointly embed entities, relations, text features and type labels into two low-dimensional spaces.",
"Miwa and Bansal (2016) proposed a joint model containing two long-short term memories (LSTMs) (Gers et al., 2000) with shared parameters.",
"Zheng et al. (2017) proposed to extract relational triples directly by transforming this task into a sequence tagging problem, whose tags contain the information of entities and the relations they hold.",
"However, they only assigned one label for each word, which means that this method failed to extract overlapping triples.",
"Subsequent work proposed several mechanisms to solve this problem: (1) labeling tagging sequences for words (Dai et al., 2019) or entities (Yu et al., 2019; Wei et al., 2020); (2) transforming the sentence into a graph structure (Wang et al., 2018; Fu et al., 2019); (3) generating triple element sequences with copy mechanism (Zeng et al., 2018, 2019, 2020; Nayak and Ng, 2020); (4) linking token pairs with a handshake tagging scheme (Wang et al., 2020).",
"However, these methods usually ignored implicit relational triples that are not directly expressed in the sentence (Zhu et al., 2019), thus will lead to the incompleteness of the resulting KGs and negatively affect the performance of downstream tasks (An-geli and Manning, 2013; Jia et al., 2020).",
"Our work is motivated by two observations.",
"First, other triples in the sentence provide supplementary evidence for discovering entity pairs with implicit relational connections.",
"Second, the relation types of the implicit connections need to be identified through real-world reasoning patterns.",
"In this paper, we propose a unified framework for the joint extraction of explicit and implicit relational triples.",
"We propose a binary pointer network to sequentially extract overlapping relational triples and externally keep the information of predicted triples for exploring implicitly connected entity pairs.",
"We also propose to introduce real-world reasoning patterns in our model to help derive the relation type of implicit triples with a relation network.",
"Experimental results on several benchmark datasets demonstrate the effectiveness of our method.",
"Figure 2: The overall framework of our approach.",
"The overall framework of our approach is shown in Figure 2.",
"We introduce the Binary Pointer Network (BPtrNet) and the Relation Network (RN) in Section 3.1 and 3.2 and the details of training and inference in Section 3.3, respectively.",
"Existing methods usually failed to extract implicit relational triples due to the lack of explicit expressions (Zhu et al., 2019).",
"Fortunately, we observe that other triples in the sentence can help discover entity pairs that may have implicit relational connections.",
"For instance, in the sentence George is Judy's father and David's grandfather , the relation between Judy and David is not explicitly expressed.",
"In this case, if we first extract the explicit triple ( Judy , father , George ) and keep its information in our model, we can easily establish an implicit connection between Judy and David through George because George is explicitly connected with David by the relational keyword grandfather .",
"Inspired by this observation, our model extracts relational triples relevant to each word sequentially and keeps all previous triples of this sentence to enhance the extraction at future time steps.",
"This word-by-word extraction process can be regarded as transforming a text sequence into a sequence of extracting actions, which leads us to a sequence-to-sequence (seq2seq) model.",
"Therefore, we propose a Binary Pointer Network (BPtrNet), based on a seq2seq pointer network (Vinyals et al., 2015), to jointly extract explicit and implicit relational triples.",
"Our model first encodes the words of a sentence into vector representations (Section 3.1.1).",
"Then, we use a binary decoder to sequentially transform the vectors into (overlapping) relational triples (Section 3.1.2).",
"We also introduce an external memory to retain previously extracted triples for enhancing future decoding steps (Section 3.1.3).",
"Given a sentence [ w 1 , . . . , w n ] , we first capture morphological patterns of entities with a convolutional neural network (CNN) (LeCun et al., 1989) and compute the character representation c i of the word w i ( i = 1 , . . . , n ): c i = CNN ( w i ; ) R d c .",
"Then we introduce the context-sensitive representations p 1: n captured with a pre-trained Language Model (LM) to bring rich semantics and prior knowledge from the large-scale unlabeled corpus.",
"We feed c i , p i and the word embedding w i into a bidirectional LSTM (BiLSTM) to compute the con-textualized word representations x 1: n and encode the sentence with another BiLSTM: x i = BiLSTM in ([ w i ; p i ; c i ]) R d in .",
"First, to capture the interactions between entities and relations, we recognize the entities with a span-based entity tagger (Yu et al., 2019; Wei et al., 2020) and transform the tags into vectors as part of the decoder's input (Figure 2).",
"Specifically, we assign each token a start and end tag to indicate whether the current token corresponds to a start or end position of an entity of a certain type: h Ti = BiLSTM Tag ( x i ) R d T p ( y s/ei ) = softmax ( W s/e h Ti + b s/e ) tag s/ei = arg max k p ( y s/ei = k ) (2) where ( W s , b s ) and ( W e , b e ) are parameters of the start and end tag classifiers, respectively.",
"Then we obtain the entity tag embedding e i R d e by averaging the look-up embeddings of the start and end tags.",
"We also capture a global contextual embedding g by max pooling over h E 1: n .",
"Then we adopt a LSTM as the decoder ( h D 0 = h En ): h Di = LSTM Dec ([ x i ; e i ; g ] , h Di 1 ) R 2 d E .",
"(3) Next, we introduce how to extract relational triples at the i -th time step.",
"We consider the current word as the object entity, select words as subjects that form triples with the object from all the words of the sentence, and predict the relation types between the subjects and the object.",
"For example, in Figure 2, when the current object is Huntsville , the model selects Digium as the subject and clas-sifies the relation as Locate in .",
"Thus ( Digium , Locate in , Huntsville ) is extracted as a relational triple.",
"Multi-token entities are represented with their last words and recovered by finding the nearest start tags of the same type from their last positions.",
"However, the original softmax pointer in (Vinyals et al., 2015) only allows an object to point to one subject, thus fails to extract multiple triples with overlapping objects.",
"To address this issue, we propose a binary pointer, which independently computes a binary score for each subject to form a relational triple with the current object under each relation type.",
"Our method naturally solves the overlapping triple problem by producing multiple positive scores at one step (Figure 2).",
"We formulate the score of the triple ( w j , r , w i ) as: s ( r ) ji = ( r (cid:62) ( W ptr [ h Ej ; h Di ] + b ptr )) , (4) and extract this candidate triple as a relational triple if s ( r ) ji is higher than some threshold, such as 0 .",
"in our model ( i, j = 1 , . . . , n ) .",
"and are the sigmoid and tanh functions, respectively.",
"r R d R is the type embedding of the relation r .",
"W ptr and b ptr are parameters of the binary pointer.",
"We introduce an external memory M to keep the previously extracted triples of the sentence.",
"We first initialize M as an empty set.",
"After the decoder's extraction process at the i -th time step, we represent the extracted triple t = ( w s t , r t , w i ) as: h Mt = [ h Es t ; r t ; h Ei ] R d M .",
"where N i is the number of the currently extracted triples and N = (cid:80) ik =1 N k .",
"Note that we set and update the external memory for each sentence independently, and the memory stores only the triple representations of one single sentence.",
"Thus triples of other sentences will not be introduced into the sentence currently being extracted.",
"Finally, the triples in the memory are utilized to obtain the reasoning pattern-enhanced representations for future time steps, as described in Section 3.2.",
"Relation types of implicit relational triples are dif-ficult to infer due to the lack of explicit evidence, thus need to be derived with real-world relational reasoning patterns.",
"For example, in the sentence George is Judy's father and David's grandfather , the relation type between Judy and David can be inferred as father using the pattern father 's father is called grandfather .",
"Based on this fact, we propose to enhance our model by introducing real-world relational reasoning patterns.",
"We capture the patterns with a Relation Network (RN) (Santoro et al., 2017), a neural network module specially designed for relational reasoning.",
"A RN is essentially a composite function over a relational triple set T : RN ( T ) = f (cid:0) { g ( t ) } t T (cid:1) , where f is an aggregation function and g projects a triple into a fixed-size embedding.",
"We set the memory M as the input relational triple set T and utilize the RN to learn a pattern-enhanced representation h Pji for the word pair ( w j , w i ) at the i -th time step.",
"First, the g reads the triple representations from M and projects them with a fully-connected layer: g ( t ) = ( W h Mt + b ) R d P .",
"Then f selects useful triples with a gating network 1 : u t = (cid:0) g ( t ) U [ h Ej ; h Di ] (cid:1) R , and aggregates the selected triples with the word pair to compute h Pji using another fully-connected layer:",
"Finally, we modify Equation 4 as s ( r ) ji = ( r (cid:62) h Pji ) to compute the binary scores of candidate triples.",
"We denote our Reasoning pattern enhanced BPtrNet model as R-BPtrNet.",
"Note that we use quite simple formulas for f and g because our contribution focuses on the effectiveness of introducing relational reasoning patterns for this task rather than the model structure.",
"Exploration for more complex structures will be left for future work.",
"We calculate the triple loss of a sentence as a binary cross entropy over valid candidate triples T v , whose subject and object are different entities (or the end words of different entities):",
"where s t is the score of the candidate triple t , y t = 1 for gold triples and 0 for others.",
"We also train the entity tagger with a cross-entrory loss: L e = 1 n n (cid:88) i =1 (cid:88) { s,e } log p ( y i = y i ) (10) where y s/e i are the gold start and end tags of the i -th word, respectively.",
"Finally, we train the R-BPtrNet with the joint loss L = L t + L e .",
"To prevent error propagation, we use the gold entity tags to filter out valid candidate triples and compute the tag embeddings e 1: n during training.",
"We also update the memory M with the gold relational triples.",
"During inference, we extract triples from scratch and use the predicted entity tags and relational triples instead of the gold ones.",
"1 We don't use the more common attention mechanism (Bahdanau et al., 2015) to select triples because the attention weights are restricted to sum to 1.",
"If all triples in the memory are useless, they will still be assigned a large weight due to the restriction, which will confuse the model.",
"We evaluate our method on two benchmark datasets.",
"NYT (Riedel et al., 2010) consists of sentences from the New York Times corpus and contains 24 relation types.",
"WebNLG (Gardent et al., 2017) was created for natural language generation task.",
"It contains 171 relation types 2 and was adopted for relational triple extraction by (Zeng et al., 2018).",
"We split the sentences into three categories: Normal , SingleEntityOverlap ( SPO ) and EntityPairOverlap ( EPO ) following Zeng et al. (2018).",
"The statistics of the two datasets are shown in Table 1.",
"Following previous work (Zeng et al., 2018; Wei et al., 2020; Wang et al., 2020), an extracted relational triple is regarded as correct only if the relation and the heads of both subject and object are all correct.",
"We report the standard micro precision, recall, and F 1 -scores on both datasets.",
"We determine the hyper-parameters on the validation sets.",
"We use the pre-trained GloVe (Penning-ton et al., 2014) embeddings as w .",
"We adopt a one-layer CNN with d c = 30 channels to learn c from 30-dimensional randomly-initialized character embeddings.",
"We choose the state-of-the-art RoBERTa LARGE (Liu et al., 2019) model 3 as the pre-tained LM.",
"For a fair comparison with previous methods, we also conduct experiments and report the scores with BERTBASE (Devlin et al., 2019).",
"We set d in (Equation",
"1) as 300.",
"The hidden dimensions of the encoder d E and the entity tagger d T are both 200 .",
"The dimensions of entity tag embeddings d e and relation type embeddings d R are set as 50 and 200 , respectively.",
"The projection dimension d P of the RN is set as 500 .",
"We add 10% dropout (Srivastava et al., 2014) on the input of all LSTMs for regularization.",
"Following previous work (Zeng et al., 2018; Wei et al., 2020; Wang et al., 2020), we set the max length of input sentences to 100.",
"We use the Adam optimizer (Kingma and Ba, 2014) to fine-tune the LM and train other parameters with the learning rates of 10 5 and 10 3 , respectively.",
"We train our model for 30/90 epochs with the batch size as 32/8 on NYT/WebNLG.",
"At the beginning of the last 10 epochs, we load the parameters with the best validation performance and divide the learning rates by ten.",
"Finally, we choose the best model on the validation set and output results on the test set.",
"We present our results on the NYT and WebNLG test sets in Table 2 and compare them with several previous state-of-the-art models:",
"NovelTagging (Zheng et al., 2017) transformed this task into a sequence tagging problem but neglected the overlapping triples.",
"CopyRE (Zeng et al., 2018) proposed a seq2seq model based on the copy mechanism to generate triple element as sequences.",
"CopyRE RL (Zeng et al., 2019) proposed to learn the extraction order of CopyRE with Reinforcement Learning (RL).",
"GraphRel (Fu et al., 2019) proposed a graph convolutional network for this task.",
"ETL-Span (Yu et al., 2019) proposed a decomposition-based tagging scheme.",
"CopyMTL (Zeng et al., 2020) proposed a Multi-Task Learning (MTL) framework based on CopyRE to address multi-token entities.",
"WDec (Nayak and Ng, 2020) proposed an encoder-decoder architecture for this task.",
"CGT UniLM (Ye et al., 2020) proposed a generative transformer module with a triple contrastive training object.",
"CASREL (Wei et al., 2020) proposed a cascade binary tagging framework.",
"TPLinker (Wang et al., 2020) proposed a one-stage token pair linking model with a novel handshaking tagging scheme.",
"From Table 2 we have the following observations: (1) The R-BPtrNet significantly outperforms all previous non-LM methods.",
"It demonstrates the superiority of our seq2seq-based framework to jointly extract explicit and implicit relational triples and improve the performance for this task.",
"Additionally, the R-BPtrNet produces competitive performance to the BERT-based baseline models Method NYT WebNLG Nor.",
"without using BERT.",
"It shows that the improvements of our model come not primarily from the pre-trained LM representations, but from the introduction of relational reasoning patterns to this task.",
"(2) R-BPtrNet BERT outperforms BERT-based baseline models.",
"It indicates that our method can effectively extract implicit relational triples with the assistance of the triple-retaining external memory and the pattern-capturing RN.",
"(3) R-BPtrNet RoBERTa further outperforms R-BPtrNet BERT and other baseline methods.",
"It indicates that the more powerful LM brings more prior knowledge and real-world relational facts, enhancing the model's ability to learn real-world relational reasoning patterns.",
"To demonstrate the ability of our model in handling the multiple triples and overlapping triples of a sentence, we split the test sets of NYT and WebNLG datasets according to the overlapping",
"patterns and the number of triples.",
"We conduct further experiments on these subsets and report the results in Table 3, from which we can observe that: (1) The R-BPtrNet RoBERTa and R-BPtrNet BERT both significantly outperform previous models on the SPO and EPO subsets of NYT and WebNLG datasets.",
"It proves the validity of our method to address the overlapping triple problem.",
"Moreover, we find that implicit relational triples usually overlap with others.",
"Therefore, the improvements on the overlapping subsets also validate the effectiveness of our method for extracting implicit relational triples.",
"(2) R-BPtrNet RoBERTa and R-BPtrNet BERT both bring improvements to sentences with multiple triples compared to baseline models.",
"It indicates that our method can effectively extract multiple relational triples from a sentence.",
"Furthermore, we observe more significant improvements when the number of triples grows.",
"We hypothesize that this is because implicit relational triples are more likely to occur in sentences with more triples.",
"Our model extracts the implicit relational triples more correctly and improves the performance.",
"We run an ablation study to investigate the contribution of each component in our model to the implicit relational triples.",
"We manually select 134 sentences with rich implicit triples from the NYT test set 4 .",
"We conduct experiments on the subset 4 We first select sentences that contain at least two overlapping relational triples.",
"For example, if a sentence contains entity A , B and C , and if A B and B C exists or A B and A C exists, this sentence is selected.",
"Note that A B and B A are counted as one triple during this selecting procedure.",
"Then, we manually check all selected sentences and keep the ones with implicit relational triples.",
"using the following ablation options: R-BPtrNet RoBERTa and R-BPtrNet BERT are the full models using RoBERTa LARGE and BERTBASE as LMs, respectively.",
"R-BPtrNet removes the pre-trained LM representations from the full model.",
"BPtrNet removes the RN from the R-BPtrNet.",
"Under this setting, we feed a gated summation of the memory into the decoder's input of the next time step.",
"BPtrNet NoMem removes the external memory from the BPtrNet, which means that the previously extracted triples are not retained.",
"We compare the performance of these options with the previous BERT-based models.",
"We also analyze the performance on predicting only the entity pairs and the relations, respectively.",
"We illustrate the results in Figure 3, from which we can observe that: (1) BPtrNet NoMem produces comparable results to the baseline models.",
"We speculate that it benefits from the seq2seq structure and the previous triples are embedded into the decoder's hidden states.",
"(2) BPtrNet brings huge improvements over the BPtrNet NoMem to the entity pair and the triple F 1 scores.",
"It indicates that the external memory effectively helps discover entity pairs that have implicit relational connections by retaining previously extracted triples.",
"(3) R-BPtrNet brings significant improvements over the BPtrNet to the relation and the triple F 1 scores.",
"It indicates that the RN effectively captures the relational reasoning patterns and enhances the relation type inference of implicit relations.",
"(4) The pre-trained LMs only bring minor improvements.",
"It proves that the effectiveness of our model comes primarily from the external memory and the introduction of relational reasoning patterns rather than the pre-trained LMs.",
"Figure 4 shows the comparison of the best previous model TPLinker BERT and our R-BPtrNet BERT model on three example sentences from the implicit subset in Section 4.5.",
"The first example contains the transitive pattern of the relation contains .",
"The second example contains a multi-hop relation path pattern between Chad Hurley and Google through the intermediate entity Youtube .",
"The third example contains a composite pattern between the siblings Crowley and Edwin Edwards with a common ancestor Edmund .",
"We can observe that the TPLinker BERT model fails to extract the implicit relational triples.",
"The R-BPtrNet BERT successfully captures various reasoning patterns in the real world and effectively extracts all the implicit relational triples in the examples.",
"In this paper, we propose a unified framework to extract explicit and implicit relational triples jointly.",
"To discover entity pairs that may have implicit relational connections, we propose a binary pointer network to extract relational triples relevant to each word sequentially and introduce an external memory to retain the extracted triples.",
"To derive the relation types of the implicitly connected entity pairs, we propose to introduce real-world relational reasoning patterns to this task and capture the reasoning patterns with a relation network.",
"We conduct experiments on two benchmark datasets, and the results prove the effectiveness of our method.",
"We would like to thank the anonymous reviewers for their constructive comments on this paper.",
"This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, and 61862002."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"other"
] |
[
"Distant supervision for relation extraction provides uniform bag labels for each sentence inside the bag, while accurate sentence labels are important for downstream applications that need the exact relation type.",
"Directly using bag labels for sentence-level training will introduce much noise, thus severely degrading performance.",
"In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels regarding that the instance does not belong to these complementary labels.",
"Since the probability of selecting a true label as a complementary label is low, NT provides less noisy information.",
"Furthermore, the model trained with NT is able to separate the noisy data from the training data.",
"Based on NT, we propose a sentence-level framework, SENT, for distant relation extraction.",
"SENT not only filters the noisy data to construct a cleaner dataset, but also performs a relabeling process to transform the noisy data into useful training data, thus further benefit-ing the model's performance.",
"Experimental results show the significant improvement of the proposed method over previous methods on sentence-level evaluation and de-noise effect.",
"Relation extraction (RE), which aims to extract the relation between entity pairs from unstructured text, is a fundamental task in natural language processing.",
"The extracted relation facts can benefit various downstream applications, e.g., knowledge graph completion (Bordes et al., 2013; Wang et al., 2014), information extraction (Wu and Weld, 2010) and question answering (Yao and Van Durme, 2014; Fader et al., 2014).",
"supervision (Mintz et al., 2009) is proposed to gather training data through automatic alignment between a database and plain text.",
"Such annotation paradigm results in an inevitable noise problem, which is alleviated by previous studies using multi-instance learning (MIL).",
"In MIL, the training and testing processes are performed at the bag level, where a bag contains noisy sentences mentioning the same entity pair but possibly not describing the same relation.",
"Studies using MIL can be broadly classified into two categories: 1) the soft de-noise methods that leverage soft weights to differentiate the influence of each sentence (Lin et al., 2016; Han et al., 2018c; Li et al., 2020; Hu et al., 2019a; Ye and Ling, 2019; Yuan et al., 2019a,b); 2) the hard de-noise methods that remove noisy sentences from the bag (Zeng et al., 2015; Qin et al., 2018; Han et al., 2018a; Shang, 2019).",
"However, these bag-level approaches fail to map each sentence inside bags with explicit sentence labels.",
"This problem limits the application of RE in some downstream tasks that require sentence-level relation type, e.g., Yao and Van Durme (2014) and Xu et al. (2016) use sentence-level relation extraction to identify the relation between the answer and the entity in the question.",
"Therefore, several studies (Jia et al. (2019); Feng et al. (2018)) have made efforts on sentence-level (or instance-level) distant RE, empirically verifying the deficiency of bag-level methods on sentence-level evaluation.",
"However, the instance selection approaches of these methods depend on rewards(Feng et al., 2018) or frequent patterns(Jia et al., 2019) determined by bag-level labels, which contain much noise.",
"For one thing, one bag might be assigned to multiple bag labels, leading to difficulties in one-to-one mapping between sentences and labels.",
"As shown in Fig.1, we have no access to the exact relation between place of birth and employee of for the sentence Obama was born in the United States..",
"For another, the sentences inside a bag might not express the bag relations.",
"In Fig.1, the sentence Obama was back to the United States yesterday actually express the relation live in, which is not included in the bag labels.",
"In this work, we propose the use of negative training (NT) (Kim et al., 2019) for distant RE.",
"Different from positive training (PT), NT trains a model by selecting the complementary labels of the given label, regarding that the input sentence does not belong to this complementary label.",
"Since the probability of selecting a true label as a complementary label is low, NT decreases the risk of providing noisy information and prevents the model from overfitting the noisy data.",
"Moreover, the model trained with NT is able to separate the noisy data from the training data (a histogram in Fig.3 shows the separated data distribution during NT).",
"Based on NT, we propose SENT, a sentence-level framework for distant RE.",
"During SENT training, the noisy instances are not only filtered with a noise-filtering strategy, but also transformed into useful training data with a re-labeling method.",
"We further design an iterative training algorithm to take full advantage of these data-refining processes, which significantly boost performance.",
"Our codes are publicly available at Github 1 .",
"To summarize the contribution of this work: We propose the use of negative training for sentence-level distant RE, which greatly protects the model from noisy information.",
"We present a sentence-level framework, SENT, which includes a noise-filtering and a re-labeling strategy for re-fining distant data.",
"The proposed method achieves significant improvement over previous methods in terms of both RE performance and de-noise effect.",
"1 https://github.com/rtmaww/SENT 2 Related Work 2.1 Distant Supervision for RE Supervised relation extraction (RE) has been constrained by the lack of large-scale labeled data.",
"Therefore, distant supervision (DS) is introduced by Mintz et al. (2009), which employs existing knowledge bases (KBs) as source of supervision instead of annotated text.",
"Riedel et al. (2010) relaxes the DS assumption to the express-at-least-once assumption.",
"As a result, multi-instance learning is introduced (Riedel et al. (2010); Hoffmann et al. (2011); Surdeanu et al. (2012)) for this task, where the training and evaluating process are performed in bag-level , with potential noisy sentences existing in each bag.",
"Most following studies in distant RE adopt this paradigm, aiming to decrease the impact of noisy sentences in each bag.",
"These studies include the attention-based methods to attend to useful information ( Lin et al. (2016); Han et al. (2018c); Li et al. (2020); Hu et al. (2019a); Ye and Ling (2019); Yuan et al. (2019a); Zhu et al. (2019); Yuan et al. (2019b); Wu et al. (2017)), the selection strategies such as RL or adversarial training to remove noisy sentences from the bag (Zeng et al. (2015); Shang (2019); Qin et al. (2018); Han et al. (2018a)) and the incorporation with extra information such as KGs, multi-lingual corpora or other information (Ji et al. (2017); Lei et al. (2018); Vashishth et al. (2018); Han et al. (2018b); Zhang et al. (2019); Qu et al. (2019); Verga et al. (2016); Lin et al. (2017); Wang et al. (2018); Deng and Sun (2019); Beltagy et al. (2019)).",
"Other approaches include soft-label strategy for denoising (Liu et al. (2017)), leveraging pre-trained LM (Alt et al. (2019)), pattern-based method (Zheng et al. (2019)), structured learning method (Bai and Ritter (2019)) and so forth (Luo et al. (2017); Chen et al. (2019)).",
"In this work, we focus on sentence-level relation extraction.",
"Several previous studies also perform Distant RE on sentence-level.",
"Feng et al. (2018) proposes a reinforcement learning framework for sentence selecting, where the reward is given by the classification scores on bag labels.",
"Jia et al. (2019) builds an initial training set and further select confident instances based on selected patterns.",
"The difference between the proposed work and previous works is that we do not rely on bag-level labels for sentence selecting.",
"Furthermore, we leverage NT to dynamically separate the noisy data from R e l a t i o n C l a ss i f i e r Bag Level Instance Level Filter Confidence Confidence Re-label (3) Iteration R e l a t i o n C l a ss i f i e r Initialized (1) Negative Training Instances with relation labels A, B, C Filtered instances Instances re-labeled to label A, B, C (2) Noise Filtering & Re-labeling Figure 2: An overview of the proposed framework, SENT, for sentence-level distant RE.",
"Learning with noisy data is a widely discussed problem in deep learning, especially in the field of computer vision.",
"Existing approaches include robust learning methods such as leveraging a robust loss function or regularization method(Lyu and Tsang, 2020; Zhang and Sabuncu, 2018; Hu et al., 2019b; Kim et al., 2019), re-weighting the loss of potential noisy samples (Ren et al., 2018; Jiang et al., 2018), modeling the corruption probability with a transition matrix (Goldberger and Ben-Reuven, 2016; Xia et al.) and so on.",
"Another line of research tries to recognize or even correct the noisy instances from the training data(Malach and Shalev-Shwartz, 2017; Yu et al., 2019; Arazo et al., 2019; Li et al., 2019).",
"In this paper, we focus on the noisy label problem in distant RE.",
"We first leverage a robust negative loss (Kim et al., 2019) for model training.",
"Then, we develop a new iterative training algorithm for noise selection and correction.",
"In order to achieve sentence-level relation classification using bag-level labels in distant RE, we propose a framework, SENT, which contains three main steps (as shown in Fig.2): (1) Separating the noisy data from the training data with negative training (Sec.3.1); (2) Filtering the noisy data as well as re-labeling a part of confident instances (Sec.3.2); (3) Leveraging an effective training algorithm based on (1) and (2) to further boost",
"the performance (Sec.3.3).",
"Specifically, we denote the input data in this task as S = { ( s 1 , y 1 ) , . . . , ( s N , y N ) } , where y i R = { 1 , . . . , C } is the bag-level label of the i th input sentence s i .",
"Obviously, this is a noisy dataset drawn from a noisy distribution D because these bag-level labels y come from the distant label of each entity bag.",
"For each s i containing a pair of entities < e 1 , e 2 > , y i is one of the relation facts 2 that < e 1 , e 2 > participates in in the database.",
"Such annotation method indicates that y i is a potential noisy label for s i .",
"Here, we denote D as the real data distribution without noise, and the clean dataset drawn from D as S = { ( s 1 , y 1 ) , . . . , ( s N , y N ) } .",
"The ambition of this work is to find the best estimated parameters of the real mapping f : x y, ( x, y ) D based on the noisy data S .",
"We design three steps for achieving this goal: (1) Recognizing the set of noisy data S n from S using negative training, where S n = { ( s i , y i ) | y i (cid:54) = y i } .",
"(2) Refining S by noise-filtering and re-labeling, e.g., S refined = ( S \\ S n ) S n,relabeled , where S n,relabeled = { ( s i , y i ) | ( s i , y i ) S n } .",
"(3) Iteratively perform (1) and (2) so the refined dataset S refined approaches the real dataset S .",
"In order to perform robust training on the noisy distant data, we propose the use of negative Training (NT), which trains based on the concept that the input sentence does not belong to this complementary label.",
"We find that NT not only 2 Here, we randomly choose one of the multiple bag labels for injective relation classification.",
"Positive training (PT) trains the model towards predicting the given label, based on the concept that the input sentence belongs to this label.",
"Here, given any input s with a label y R = { 1 , 2 , . . . , C } , y { 0 , 1 } C is the C-dimension one-hot vector of y .",
"We denote p = f ( s ) as the probability vector of a sentence given by a relation classifier f ( ) .",
"With the cross entropy loss function, the loss defined in typical positive training is: LPT ( f, y ) = C (cid:88) k =1 y k log p k (1) where p k denotes the probability of the k th label.",
"Optimizing on Eq.1 meets the requirement of PL, as the probability of the given label approaches 1 with the loss decreasing.",
"In negative training (NT), for each input s with a label y R , we generate a complementary label y by randomly sampling from the label space except y , e.g., y R \\{ y } .",
"With the cross entropy loss function, we define the loss in negative training as: LNT ( f, y ) = C (cid:88) k =1 y k log(1 p k ) (2) Different from PT, Eq.2 aims to reduce the probability value of the complementary label, as p k 0 with the loss decreasing.",
"To further illustrate the effect of NT, we train the classifier with PT and NT respectively on a constructed TACRED dataset with 30% noise (details shown in Sec.4.1).",
"A histogram 3 of the training data after PT and NT is shown in Figs.",
"3(a),(b), which reveals that, when training with PT, the confidence of clean data and noisy data increase with no difference, resulting in the model to overfit noisy training data.",
"On the contrary, when training with NT, the confidence of noisy data is much lower than that of clean data.",
"This result confirms that the model trained with NT suffers less from overfitting noisy data with less noisy information provided.",
"Moreover, as the confidence value of clean data and noisy data separate from each other, we are able to filter noisy data with a certain threshold.",
"Fig.4 shows the details of the data-filtering effect.",
"After the first iteration of NT, a modest threshold contributes to 97% precision noise-filtering with about 50% recall, which further verifies the effectiveness of NT on noisy data training.",
"In Section 3.1, we have illustrated the effectiveness of NT on training with noisy data, as well as the capability to recognize noisy instances.",
"While filtering noisy data is important for training on distant data, these filtered data contain useful information that can boost performance if properly re-labeled.",
"In this section, we describe the proposed noise-filtering and label-recovering strategy for refining distant data based on NT.",
"As discussed before, it is intuitive to construct a filtering strategy based on a certain threshold after NT.",
"However, in distant RE, the long-tail problem cannot be neglected.",
"During training, the 3 When drawing the histogram, we omitted the large amount of NA-class data (80% of the training data) for a clearer representation of the positive-class data.",
"degree of convergence is disparate among different classes.",
"Simply setting a uniform threshold might harm the data distribution with instances of longtail relations largely filtered out.",
"Therefore, we leverage a dynamic threshold for filtering noisy data.",
"Suppose the probability of class c of the i th instance is p ic (0 , p hc ) , where p hc is the maximum probability value in class c .",
"Based on empirical experience, we assume the probability values follow a distribution where the noisy data are largely distributed in low-value areas and the clean data are generally distributed in middleor high-value areas.",
"Therefore, the filtering threshold of class c is set to: T h c = T h p hc , p hc = N max i =1 { p ic } (3) where T h is a global threshold.",
"In this way, the noise-filtering threshold not only relies on the degree of convergence in each class, but also dynamically changes during the training phase, thus making it more suitable for noise-filtering on long-tail data.",
"After noise-filtering, the noisy instances are regarded as unlabeled data, which also contain useful information for training.",
"Here, we design a simple strategy for re-labeling these unlabeled data.",
"Given the set of filtered data D u = { s 1 , . . . , s m } , we use the classifier trained in this iteration to predict the probability vectors { p 1 , . . . , p m } .",
"Then, we re-label these instances by: y i = arg max k { p ik } , if max k { p ik } > T h relabel (4) where p ik is the probability of the i th instance in class k, and T h relabel is the re-label threshold.",
"Although effective, simply performing a pipeline of NT, noise-filtering and re-labeling fail to take full advantage of each part, thus the model performance can be further boosted through iterative training.",
"As shown in Fig.2, for each iteration, we first train the classifier on the noisy data using NT: for each instance, we randomly sample K complementary labels and calculate the loss on these labels with",
"Eq.(2).",
"After M -epochs negative training, the noise-filtering and re-labeling processes are carried out for updating the training data.",
"Next, we perform a new iteration of training on the newly-refined data.",
"Here, we re-initialize the classifier in every iteration for two reasons: First, re-initialization ensures that in each iteration, the new classifier is trained on a dataset with higher quality.",
"Second, re-initialization introduces randomness, thus contributing to more robust data-filtering.",
"Finally, we stop the iteration after observing the best result on the dev set.",
"We then perform a round of noise-filtering and re-labeling with the best model in the last iteration to obtain the final refined data.",
"Fig.3(c) shows the data distribution after certain iterations of SENT.",
"As seen, the noise and clean data are separated by a large margin.",
"Most noisy data are successfully filtered out, with an acceptable number of clean data mistaken.",
"However, we can see that the model trained with NT still lacks convergence (with low-confidence predictions).",
"Therefore, we train the classifier on the iteratively-refined data with PT for better convergence.",
"As shown in",
"Fig.3(d), the model predictions on most of the clean data are in high confidence after PT training.",
"The experiments in this work are divided into two parts, respectively conducted on two datasets: the NYT-10 dataset (Riedel et al., 2010) and the TACRED dataset (Zhang et al., 2017).",
"The first part is the effectiveness study on sentence-level evaluation for distant RE.",
"Different from bag-level evaluation, a sentence-level evaluation compute Precision (Prec.), Recall (Rec.) and F1 metric directly on all of the individual instances in the dataset.",
"In this part, we adopt the NYT-10 data set for sentence-level training, following the setting of Jia et al. (2019), who publishes a manually labeled sentence-level test set.",
"4 Besides, they also publish a test set for evaluating noise-filtering ability.",
"Details of the adopted dataset are shown in Table 1.",
"We construct the second part of experiments (Sec.4.4) to better understand SENT's behaviors.",
"Since no labeled training data are available in the distant supervision setting, we construct a noisy dataset with 30% noise from a labeled dataset, TACRED (Zhang et al., 2017) 5 .",
"We regard this constructed dataset as noisy-TACRED.",
"The reason 4 https://github.com/PaddlePaddle/Research/tree/master/ NLP/ACL2019-ARNOR 5 https://github.com/yuhaozhang/tacred-relation Datasets NYT-10 noisy-TACRED #Label num.",
"we choose this dataset is that 80% instances in the training data are no relation.",
"This NA rate is similar to the NYT data which contains 70% NA relation type, thus analysis on this dataset is more credible.",
"When constructing noisy-TACRED, the noisy instances are uniformly selected with 30% noise ratio.",
"Then, each noisy label is created by sampling a label from a complementary class with a weight of class frequency (in order to maintain the data distribution).",
"Note that the original dataset consists of 80% no relation data, which means 80% of the noisy instances are false-positive instances, corresponding to the large amount of false-positive noise in NYT-10.",
"Details of the noisy-TACRED are also shown in Table 1.",
"We compare our SENT method with several strong baselines in distant RE.",
"These compared methods can be categorized as: bag-level denoising methods, sentence-level denoising methods, sentence-level non-denoising methods.",
"PCNN+SelATT (Lin et al., 2016): A bag-level RE model which leverages an attention mechanism to reduce noise effect.",
"PCNN+RA BAG ATT (Ye and Ling, 2019) short for PCNN+ATT RA+BAG ATT, a bag-level model containing both intra-bag and inter-bag attentions to alleviate noise.",
"CNN+RL 1 (Qin et al., 2018): A RL-based bag-level method.",
"Different from CNN+RL 2 , they redistribute the filtered data into the negative examples.",
"CNN+RL 2 (Feng et al., 2018): A sentence-level RE model.",
"It jointly train a instance selector and a 6 Statistics of NYT-10 are quoted from (Jia et al., 2019).",
"CNN classifier using reinforcement learning (RL).",
"ARNOR (Jia et al., 2019): A sentence-level RE model which selects confident instances based on the attention score on the selected patterns.",
"It is the state-of-the-art method in sentence level.",
"CNN (Zeng et al., 2014), PCNN (Zeng et al., 2015) and BiLSTM (Zhang et al., 2015) are typical architectures used in RE.",
"BiLSTM+ATT (Zhang et al., 2017) leverages an attention mechanism based on BiLSTM to capture useful information.",
"BiLSTM+BERT (Devlin et al., 2019): Based on BiLSTM, it utilizes the pre-trained BERT representations as word embedding.",
"As SENT is a model-agnostic framework, we implement the classification model with two typical architectures: BiLSTM and BiLSTM+BERT.",
"Since BiLSTM is also the base model of ARNOR, we can compare these two methods more fairly.",
"During SENT training, we use the 50-dimension glove vectors as word embedding.",
"While for PT after SENT, we randomly initialize the 50-dimension word embedding as the same in ARNOR.",
"In both training phases, we use 50-dimension randomly-initialized position and entity type embedding.",
"We train a single-layer BiLSTM with hidden size 256 using the adam optimizer at a learning rate of 5e-4.",
"When implemented with BiLSTM+BERT, the setting is the same as those with BiLSTM except that we use a 768-dimension fixed BERT representation as word embedding (we use the bert-base-uncased pre-trained model).",
"We tune the hyperparameters on the development set via a grid search.",
"Specifically, when training on the NYT dataset, we train the model for 10 epochs in each iteration, with the global data-filtering threshold T h = 0 .",
"25 , the re-labeling threshold T h relabel = 0 .",
"7 and negative samples number K = 10 .",
"When training on the noisy-TACRED, we train for 50 epochs in each iteration, with T h = 0 .",
"15 , T h relabel = 0 .",
"85 and K = 50 .",
"To deal with the multi-label problem , we utilize a simple method by randomly selecting one of the bag labels for each sentence.",
"Such random selection turns the multi-label noise into the wrong-label noise, which is easier to handle.",
"According to Surdeanu et al. (2012), there are 31% wrong-label noise and 7.5% multi-label noise in NYT-10, and incorrect selection may result in 4% extra wrong-Method Dev Test Prec.",
"label noise, which can be filtered out through NT identically with wrong-label instances.",
"Table 2 shows the results of SENT and other baselines on sentence-level evaluation, where the results of SENT are obtained by PT after SENT.",
"We can observe that: 1) Bag-level methods fail to perform well on sentence-level evaluation, indicating that it is difficult for these bag-level approaches to benefit downstream tasks with exact sentence labels.",
"This result is consistent with the results in Feng et al. (2018).",
"2) When performing sentence-level training on the noisy distant data, all baseline models show poor results, including the preeminent pre-trained language model BERT.",
"These results indicate the negative impact of directly using bag-level labels for sentence-level training regardless of noise.",
"3) The proposed SENT method achieves a significant improvement over previous sentence-level de-noising methods.",
"When implemented with BiLSTM, the model obtains a 4.09% higher F1 score than ARNOR.",
"Moreover, when implemented with BiLSTM+BERT, the F1 score is further improved by 8.52%.",
"4) The SENT method achieves much higher precision than the previous de-noising methods when maintaining comparable or higher recall, indicating the effectiveness of the noise-filtering and re-labeling approaches.",
"In order to prove the effectiveness of SENT in denoising distant data, we conduct a noise-filtering experiment following ARNOR.",
"We use a test set published by ARNOR, which consists of 200 randomly selected sentences with an is noise annotation.",
"We perform a noise-filtering process as described in Sec.3.2.1, and calculate the de-noise accuracy.",
"As seen in Table 3, the SENT method achieves remarkable improvement over ARNOR in F1 score by 12%.",
"While improving in precision, SENT achieves 20% gain over ARNOR in recall.",
"As ARNOR initializes the training data with a small part of frequent patterns, these patterns might limit the model from generalizing to various correct data.",
"Different from ARNOR, SENT leverages negative training to automatically learn the correct patterns, showing better ability in diversity and generalization.",
"In this section, we analyze the effectiveness of the data-refining process with a self-constructed noisy data set: noisy-TACRED (details in Table 1).",
"Table 4 shows the results of training on TACRED and noisy-TACRED.",
"As seen, the baseline model degrades dramatically on the noisy data, with the LSTM dropping by 20.2%.",
"However, after training with SENT, the BiLSTM model can achieve comparable results with the model that trained on the clean data.",
"Note that the de-noising method is quite helpful in promoting the precision score, yet the recall is still lower than that on clean data.",
"We also evaluate the noise-filtering and label-recovering ability on the noisy-TACRED training set, as shown in Fig.4.",
"We can observe that: 1) SENT achieves about 85% F1 score on the noisy-TACRED data.",
"This result is consistent with the noise-filtering results obtained on the NYT dataset (with 200 sampled instances), validating the denoising ability of SENT on different datasets.",
"2) As the training iteration progressed, the precision of noise-filtering decreases with the recall promoting.",
"More noise-filtering contributes to a cleaner dataset, while it might bring more false-noise mistakes.",
"Therefore, we stop the iteration when the model reaches the best score on the development set.",
"3) As for label-recovering, SENT can achieve about 70% precision with about 25% recall.",
"Here, the threshold setting is also a trade-off that we prefer to adopt a modest value for more accurate re-labeling.",
"As described in Sec.3.2, we design a dynamic filtering threshold for long-tail data.",
"The effect of this strategy is shown in Fig.5.",
"As seen, the degree of convergence of the long-tail relation per:cause of death is much lower than that of the head relation.",
"Simply setting a uniform threshold would harm the data distribution with instances of per:cause of death largely filtered.",
"While with a dynamically determined threshold, both data from the head and the long-tail relations are appropriately filtered.",
"To better illustrate the contribution of each component in SENT, we conduct an ablation study by removing the following components: final PT, re-labeling, dynamic threshold, re-initialization, NT.",
"The test results are shown in Table 6.",
"We can observe that: 1) Removing the final positive training affects little to the performance.",
"This is because the model trained with NT already reaches high accuracy and the purpose of final PT is only to achieve more confidential predictions.",
"2) Removing the re-labeling process harms the performance, as the filtered instances are simply discarded regardless of the useful information for Sentence Bag label Sentence label Refined label The plan filed on behalf of the state 's Democratic Congressional delegation , for instance , would make the 25th district , which zigzags 300 miles from southern Austin to Mexico , much shorter and Austin-based , which would help the incumbent Democrat , Lloyd Doggett .",
"training.",
"3) Without dynamic threshold, clean instances from the tail classes are incorrectly filtered out, which severely degrades the performance.",
"4) Re-initialization also contributes a lot to the performance.",
"The model trained on the original noisy data inevitably fits to the noisy distribution, while re-initialization helps wash out the overfitted parameters and eliminate the noise effects, thus contributing to better training and noise-filtering.",
"5) Training with PT instead of NT causes a dramatic decline in performance, especially on the precision, which verifies the effectiveness of NT to prevent the model from overfitting noisy data.",
"As discussed, SENT is able to refine the distant RE dataset.",
"In fact, there exists much noise in the NYT data that is difficult to tackle with bag-level methods.",
"In Table 5, we show some examples.",
"(1) The first two rows are the sentences in a multi-label bag.",
"We randomly choose one of the bag labels for each sentence, and the model is able to correct the bad choice (by correcting the second sentence with place lived and the first sentence with NA).",
"(2) The following three rows show a bag with label place of death, while this whole bag is actually a NA bag incorrectly labeled positive.",
"(3) SENT can also recognize the positive samples in NA.",
"As shown in the last three rows, each sentence labeled as NA is actually expressing a positive label.",
"In fact, such false-negative problem is frequently seen in the NYT data, which contains 70% negative instances that were labeled NA only because the entity pairs do not participate in a relation in the database.",
"We believe the capacity to recognize these false-negative samples can significantly boost the performance.",
"In this paper, we present SENT, a novel sentence-level framework based on Negative Training (NT) for sentence-level training on distant RE data.",
"NT not only prevent the model from overfitting noisy data, but also separate the noisy data from the training data.",
"By iteratively performing noise-filtering and re-labeling based on NT, SENT helps re-fine the noisy distant data and achieves remarkable performance.",
"Experimental results verify the improvement of SENT over previous methods on sentence-level relation extraction and noise-filtering effect.",
"The authors wish to thank the anonymous reviewers for their helpful comments.",
"This work was partially funded by China National Key R&D Program (No. 2018YFB1005100), National Natural Science Foundation of China (No. 61976056, 62076069), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Domain classification is the fundamental task in natural language understanding (NLU), which often requires fast accommodation to new emerging domains.",
"This constraint makes it impossible to retrain all previous domains, even if they are accessible to the new model.",
"Most existing continual learning approaches suffer from low accuracy and performance fluctuation, especially when the distributions of old and new data are significantly different.",
"In fact, the key real-world problem is not the absence of old data, but the ineffi-ciency to retrain the model with the whole old dataset.",
"Is it potential to utilize some old data to yield high accuracy and maintain stable performance, while at the same time, without introducing extra hyperparameters?",
"In this paper, we proposed a hyperparameter-free continual learning model for text data that can stably produce high performance under various environments.",
"Specifically, we utilize Fisher information to select exemplars that can record key information of the original model.",
"Also, a novel scheme called dynamical weight consolidation is proposed to enable hyperparameter-free learning during the retrain process.",
"Extensive experiments demonstrate that baselines suffer from fluctuated performance and therefore useless in practice.",
"On the contrary, our proposed model CCFI significantly and consistently outperforms the best state-of-the-art method by up to 20% in average accuracy, and each component of CCFI contributes effectively to overall performance.",
"Catastrophic forgetting is the well-known Achilles' heel of deep neural networks, that the knowledge learned from previous tasks will be forgotten when the networks are retrained to adapt to new tasks.",
"Although this phenomenon has been noticed as early as the birth of neural networks (French, 1999; McCloskey and Cohen, 1989), it didn't attract much attention until deep neural networks have achieved impressive performance gains in various applications (LeCun et al., 2015; Krizhevsky et al., 2012).",
"Domain classification is the task that mapping the spoken utterances to natural language understanding domains.",
"It is widely used in intelligent personal digital assistant (IPDA) devices, such as Amazon Alexa, Google Assistant, and Microsoft Cortana.",
"As many IPDAs now allow third-party developers to build and integrate new domains (Ku-mar et al., 2017), these devices are eager to continual learning technologies that can achieve high performance stably (Kim et al., 2018a,b).",
"However, most traditional IPADs only work with well-separated domains built by specialists (Tur and De Mori, 2011) or customized designed for specific datasets (Li et al., 2019).",
"There is still a lack of continual learning methods that capable of general domain classification.",
"Most previous approaches capable of continual learning focus on the scenario that the new model should be retrained without any access to old data (Li and Hoiem, 2016; Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017).",
"However, these methods often involve more parameters that require extensive efforts in expert tuning.",
"And when data distributions of new tasks are obviously different from the original task (e.g. class-incremental learning), these approaches can hardly maintain good accuracy for both tasks and may suffer from fluctuations in performance.",
"On the other hand, old data are not unavailable in many practical cases.",
"The main concerns arise from the tremendous cost in memory and computation resources, if the model is updated with huge previous datasets.",
"Also, most continual learning approaches are applied to image data that little attention has been paid to text data.",
"Is it possible to develop a desirable model capable of continual learning that satisfies the following qualities?",
"1) High accuracy with limited old data .",
"Compared to the extreme cases that no access or full access to old data, it is more practical to put models under the setting with a limited amount of old data available (e.g., no more than 10% of original data).",
"In this case, models can achieve good performance without too much additional cost in physical resources and can be conveniently renewed with periodical system updates.",
"2) High stability with zero extra parameters .",
"Many continual learning models can perform well only under restricted experiment settings, such as specific datasets or carefully chosen parameters.",
"However, practical datasets are usually noisy and imbalanced distributed, and inexperienced users can't set suitable parameters in real-world applications.",
"Therefore, it is desirable to develop a hyperparameter-free model that can work stably under various experimental environments.",
"To achieve these goals, we proposed a Conti-nous learning model based on weight Consolidation and Fisher Information sampling (CCFI), with application to domain classification.",
"The main challenge is how to remember information from original tasks, not only the representative features from data, but also the learned parameters of the model itself.",
"This is a non-trivial contribution since uncontrollable changes will happen to neural network parameters whenever the feature changes.",
"To avoid such uncontrollable changes, previous work iCarL even discards deep neural networks as its final classifier and turns to k-nearest neighbors algorithm for actual prediction (Rebuffi et al., 2017).",
"Our work demonstrates that these changes are controllable with exemplar selected by Fisher information and parameters learned by Dynamical Weight Consolidation.",
"Our contributions can be summarized as follows.",
"Fisher information sampling .",
"Good exemplars are required to remember key information of old tasks.",
"Unlike previous work using simple mean vectors to remember the information of old data, exemplars selected by Fisher information record both the features of data and the information of the original neural network.",
"Dynamical weight consolidation .",
"The need for hyperparameter is an inherited problem of regularization-based continual learning.",
"Previous works search for this hyperparameter by evaluating the whole task sequence, which is supposed not to be known.",
"This work provides a simple auto-adjusted weighting mechanism, making the regularization strategy possible for a practical application.",
"Also, traditional weight consolidation methods such as EWC (Kirkpatrick et al., 2017) are designed for sequential tasks with similar distributions.",
"We extend it to the incremental learning scenario and add more regularity to achieve better stability.",
"Extensive experimental validation .",
"Most existing continual learning methods are designed for image data, while few previous attempts working on text data are often limited to specific usage scenarios and rely on fine-tuned parameters.",
"Our proposed CCFI model is a general framework that can be efficiently applied to various environments with the least efforts in parameter tuning.",
"Our extensive experimental results demonstrate the proposed CCFI can outperform all state-of-the-art methods, and provide insights into the working mechanism of methods.",
"Although most of the existing approaches are not directly applicable to our problem, several main branches of research related to this work can be found: exemplar selection, regularization of weights, and feature extraction or fine-tune method based on pre-trained models.",
"Our problem is closest to the setting of exemplar selection methods (Rebuffi et al., 2017; Li et al., 2019).",
"These approaches store examples from original tasks, and then combine them together with the new domain data for retraining.",
"iCarL (Rebuffi et al., 2017) discards the classifier based on neural network to prevent the catastrophic forgetting, and turns to traditional K-nearest neighbor classifier.",
"To avoid large changes of important parameters, regularization models (Kirkpatrick et al., 2017; Li and Hoiem, 2016; Zenke et al., 2017) add constraints to the loss function.",
"They usually introduce extra parameters requiring careful initialization.",
"And it has been shown that their performance will drop significantly if the new tasks are drawn from different distributions (Rebuffi et al., 2017).",
"On the contrary, our proposed CCFI is a parameter-free model that can produce stable performance under various experimental environments.",
"Feature extraction methods utilize pre-trained neural networks to calculate features of input data (Donahue et al., 2014; Sharif Razavian et al., 2014).",
"They make little modifications to the original network but often result in a limited capacity for learning new tasks.",
"Fine-tuning models (Girshick et al., 2014) can modify all the parameters in order to achieve better performance in new tasks.",
"Although starting with a small learning rate can indirectly preserve the knowledge learned from the original task, fine-tuning method will eventually tend to new tasks.",
"Adapter tuning (Houlsby et al., 2019) can be viewed as the hybrid of fine-tune and feature extraction.",
"Unlike our model that makes no changes to the backbone model, the Adapter tuning method increases the original model size and introduces more parameters by inserting adapter modules to each layer.",
"Given data stream D = { x i , y i } Ni =1 , the classification tasks in deep learning neural networks are equal to find the best parameter set that can maximize the probability of the data p ( D| ) .",
"Namely, the classifier can make predictions Y that best reproduce the ground truth labels Y .",
"Under the continual learning setting, new data D n of additional classes will be added to the original data stream D o in an incremental form.",
"Our goal is to update the old parameters o (trained on original data stream D o ) to the new parameters n , which can work well on both new data D n and old data D o .",
"In this paper, the initial model is trained with the original data set D o , and will output the trained model o .",
"In this training process, Fisher Information Sampling is conducted to select the most representative examples that can help to remember the parameters of the initial trained model.",
"In the retraining process, the renewed model is learned based on Dynamical Weight Consolidation , and evaluated on the training set consisted of new classes and the old exemplars.",
"The critical problem of exemplar set selection is: what are good examples that can maintain the performance of old tasks?",
"The state-of-the-art method iCaRL (Rebuffi et al., 2017) selects examples close to mean feature representation, and CoNDA (Li et al., 2019) follows the same scheme to domain adaptation on text data.",
"To utilize the advantage of the mean feature and avoid catastrophic forgetting, iCaRL chooses k-nearest neighbors algorithm as the classifier rather than deep neural networks, although the latter is proved to be a much better performer.",
"Is there any exemplar selection method that can utilize the powerful deep learning models as the classifier, and at the same time, remember the key information of old tasks?",
"Fisher information measures how much information that an observable random variable carries about the parameter.",
"For a parameter in the network trained by data D , its Fisher information is defined as the variance of the gradient of the log-likelihood: I ( ) = V ar ( s ( )) (1) = E \" @ @ log p ( D| ) @ @ log p ( D| ) T # .",
"Fisher information can be calculated directly, if the exact form of log p ( D| ) is known.",
"However, the likelihood is often intractable.",
"Instead, empirical Fisher information I ( ) is computed through data d i 2 D drawn from p ( D| ) : I ( ) = 1 NNX i =1 @ @ log p ( d i | ) @ @ log p ( d i | ) T .",
"(2) From another point of view, when log p ( D| ) is twice differentiable with respect to , Fisher information can be written as the second derivative of likelihood: I ( ) = \u0000 E @ 2 @ 2 log p ( D| ) \u0000 .",
"(3) According to Equation 3, three equivalent indications can be made to a high value in Fisher information I ( ) : a sharp peak of likelihood with respect to , can be easily inferred from data D , data D can provide sufficient information about the correct value for parameter .",
"Jointly thinking about the calculation form introduced by empirical Fisher information (Equation 2) and the physical meaning of Fisher information revealed by the second derivative form (Equation 3), we can find a way to measure how much information each data d i carries to the estimation of parameter , which we call as empirical Fisher information difference: \u0000 I i ( ) = @ @ log p ( d i | ) @ @ log p ( d i | ) T .",
"Instead of simple mean feature vectors used in previous work (Rebuffi et al., 2017; Li et al., 2019), we use the empirical Fisher information difference to select exemplar set.",
"Specifically, CCFI model makes use of BERT (Devlin et al., 2019) for text classification.",
"The base BERT model is treated as feature extractor \u0000 : X !",
"H , which takes input token sequences X , and outputs the hidden representations H .",
"To predict the true label Y , a softmax classifier is added to the top of BERT: p ( Y | X, ) = \u0000 ( WT H ) = \u0000 ( WT \u0000 ( X )) , (5) where W is the task-specific parameter matrix for classification.",
"The trained parameters can therefore be split into the fixed feature extraction part \u0000 and variable weight parameter W .",
"In continual learning setting, we denote W k 2 R h k as the most up-to-date weight matrix, where k is the number of classes that have been so far observed, and h is the size of the final hidden state H .",
"Remember that, for the classification task, the best parameters that can maximize the probability of the data p ( D| ) are also the parameters that make predictions Y closest to the ground truth label Y .",
"Therefore, we can take Equation 5 into Equation 4, in order to get the practical computation of empirical Fisher information difference for data d i on parameter .",
"Since the parameters of feature extractor \u0000 are fixed, only empirical Fisher information difference of parameters in weight matrix w j 2 W are calculated: \u0000 I i ( w j ) = @ @ w j log [ \u0000 ( WT \u0000 ( x i ))] ( y i ) 2 , (6) where the likelihood p ( d i | ) is calculated via the log-probability value of the correct label y i of input x i .",
"And the total empirical Fisher information difference data d i carrying is the sum over all w j 2 W : \u0000 I i = h k X j =1 \u0000 I i ( w j ) .",
"Algorithm 1 describes the exemplar selecting process.",
"Within each class k , the samples top ranked by empirical Fisher information difference are selected as exemplars, till the targeted sample rate (e.g., 1% ) is met.",
"The main goal of retraining process is: how to achieve good performance for both new and old tasks?",
"EWC (Kirkpatrick et al., 2017) is proved to be a good performer that can balance the performance of old and new task.",
"However, rather Algorithm 1: Construction of exemplar set Input: original data stream D o Input: trained neural network o = { \u0000 , W o } Input: sample rate r 1 for each data d i do 2 for each parameter w j 2 W o do 3 calculate \u0000 I i ( w j ) using Equation 6 4 calculate \u0000 I i using Equation 7 5 for each class k do 6 rank the samples d i by \u0000 I i 7 select the top |D k | r examples as E k Output: exemplar set E { E 1 , ..., E k } than incremental learning problem studied in this paper, EWC is designed for the tasks with same class number but different in data distributions.",
"Furthermore, EWC requires careful hyperparameter setting, which is unrealistic to be conducted by inexperienced users.",
"In this section, we introduce a scheme named Dynamical Weight Consolidation, which can avoid the requirement of such hyperparameter setting.",
"Also, this scheme is demonstrated to perform more stably than traditional EWC in the experimental part.",
"Specifically, our loss function during the retraining process can be viewed the sum of two parts: loss ` n calculated by the correct class indicators of new data and loss ` e to reproduce the performance of old model: ` n = \u0000 X y 2 Y n y log y, (8a) ` e = \u0000 X y 2 E y log y + \u0000 2 h ( k o + k n ) X j =1 I ( w j )( w nj \u0000 w oj ) 2 .",
"(8b)",
"The loss function ` e can be further divided into two parts: the cross entropy of exemplar set, and the consolidation loss caused by modifying parameters with high Fisher information.",
"In traditional EWC model, the weight \u0000 that balances cross entropy and consolidation loss is a fixed value.",
"In our CCFI model, \u0000 is updated dynamically based on current values of cross entropy and consolidation loss: \u0000 = 666666 64 lg \u0000 P y 2 Y n y log y h ( k o + k n ) P j =1 I ( w j )( w nj \u0000 w oj ) 2 777777 75 .",
"Note that, the I ( w j ) is the element in the updated",
"parameter information table T n .",
"The details can be found in the section 3.3.2.",
"This part describes the overall process of the CCFI model.",
"First we list the outputs of the old tasks, then we introduce the detailed implements of retraining.",
"After training the model with old data ( k o classes), the outputs of the old task include:1) trained model o ; 2) exemplars E of old data, and 3) parameter information table T o .",
"Each element in the parameter information table T o is the empirical Fisher information I ( w oj ) of the old task, which can be computed through Equation 2 during the initial training process.",
"The retraining process can be described as follows:",
"1. Load freeze feature extractor : The feature extractor \u0000 is kept unchanged, which means the BERT encoder with transformer blocks and self-attention heads are freezed.",
"2. Update variable weight matrix : To adopt the new classes data X n , the original variable weight matrix W k o is extended to W k o + k n 2 R h ( k o + k n ) , where the first k o columns are kept the same with the original model and the new k n columns are initialized with random numbers.",
"3. Update parameter information table : Similar to variable weight matrix, parameter information table T o is a matrix with dimension h k o .",
"To adopt the new data, it is extended to the new matrix T n with dimension h ( k o + k n ) , where the first k o columns are same with T o and the new k n columns are initialized with zero.",
"In this way, the new model can freely update the the new k n columns to lower classification loss, but will receive penalty when modifying important parameters in the original k o columns.",
"In this section, the CCFI model is first compared with the state-of-the-art methods under a continual setting.",
"And further evaluations are conducted to examine the effectiveness of the individual components within CCFI model.",
"Datasets .",
"We evaluated our proposed CCFI and comparison methods on public available 150-class dataset (Larson et al., 2019) and real-world (even product) 73-domain dataset The details of datasets can be found in Appendix A.1.",
"Baselines .",
"iCaRL (Rebuffi et al., 2017) and CoNDA (Li et al., 2019), are the closest continual learning approaches to CCFI, which are designed for the scenario with access to old data.",
"We also add fine-tune and the fixed feature method as baselines.",
"To make fair comparisons, CCFI and all the baselines use the same BERT backbone (Devlin et al., 2019), and observe the same amount of old data in all learning tasks.",
"The implementation details can be found in Appendix A.2.",
"In the main body of experiments, we report the results with the framework consisted of BERT backbone and one-layer linear classifier.",
"We also conducted experiments with a multiple-layer classifier, which can be found in Appendix A.3.",
"Two key factors play in the performance of continual learning: 1) the number of new classes for retraining, and 2) the amount of old observable data.",
"In this section, we first validate our model through a class-incremental learning task, by",
"keep-(a) Overall accuracy.",
"ing the amount of old observable data fixed and changing the number of new classes.",
"We also study the effects of different exemplars by keeping the number of new classes unchanged but varying old observable data.",
"Class-incremental Learning .",
"In this part, we conduct the class-incremental learning evaluation on both 150-class and 73-domain dataset.",
"Class-incremental learning can be viewed as the benchmark protocol for continual learning with access to old data (Rebuffi et al., 2017; Li et al., 2019).",
"Specifically, after the initial training, new domains will be added in random order.",
"After adding each batch of new data, the results are evaluated on the current data set, considering all classes have been trained so far.",
"Figure 1 and Figure 2 show the performance of class-incremental learning on 73-domain dataset and 150-class dataset.",
"CCFI outperforms other methods in all tasks on both datasets.",
"Specifically, several observations can be made as follows.",
"Overall performance .",
"Under the same new class number, CCFI always achieves the best overall accuracy among all methods.",
"And this performance gap is enlarged with more new classes added for retraining.",
"Performance fluctuations .",
"Fine-tune method is unstable in performance.",
"It is the second performer on the 73-domain dataset.",
"However, it quickly drops to almost zero and displays fluctuations on the 150-class dataset, even if the experiment conducted on the 150-class dataset is set with a higher observable ratio of old data.",
"Accuracy stage .",
"Both the fixed feature method and CoNDA display the pattern of perfor-mance stage with more new classes added, and CoNDA enjoys a larger stage than the fixed feature method.",
"For example, as shown in Figure 1a, CoNDA maintains stable performance with 5 to 12 newly added classes varying and then suddenly drops.",
"Predictable performance .",
"Both CCFI and iCaRL display linear patterns in overall performance.",
"It means the possibility to predict and estimate class-incremental learning performance, which is a preferable feature in many applications.",
"But iCaRL starts at a lower accuracy and drops much faster than CCFI, probably because it discards the neural network and tunes to the simple k-nearest neighbors algorithm as the final classifier.",
"This phenomenon also confirms that CCFI can enjoy the excellent performance of neural network classifiers and overcome its drawback of catastrophic forgetting.",
"To provide insight into the working mechanism of models capable of continual learning, we conduct experiments by varying the exemplar set's size with the number of new classes fixed.",
"Figure 3 shows the model performance under the effect of different exemplar sizes by changing the observable ratios of old data.",
"Overall pattern .",
"CCFI continues to beat baselines with obvious advantages in performance.",
"Especially, CCFI can achieve high accuracy with a minimal amount (e.g.,1%) of old data, although all methods can obtain performance improvement by increasing the ratio of old observable data.",
"A dramatic performance gain can also be observed from all models when the observable ratio of old data increases from zero to non-zero values.",
"This phenomenon further confirms that our experimental setting with limited access to old data is practically useful.",
"Consistent improvement .",
"CCFI, CoNDA, and iCarL obtain consistent improvements when increasing the ratio of old observable data.",
"ever, the fixed feature method doesn't get apparent benefits with more old data.",
"This phenomenon indicates more observations of old data are not the guarantee for better performance.",
"And it further confirms the necessity of developing continual learning methods that can effectively utilize the information learned from exemplars.",
"Our proposed CCFI outperforms all the state-of-the-art methods.",
"To provide further insights into its working mechanism, additional experiments are conducted to examine individual aspects of CCFI.",
"In order to avoid the occasionalities introduced by data and model complexity, components are examined on a synthetic data set by simple neural networks with fixed weight initialization.",
"Specifically, we generate a synthetic dataset of 10 completely separable classes, and each class includes 1,000 examples.",
"As the setting for continual learning, we use six classes for initial training, and four classes as additional new classes for retraining.",
"The neural network used in this section is a simple network with two fully-connected layers.",
"The first layer is served as a feature extractor, which is fixed after the initial training.",
"The second layer is used as a classifier that can be tuned during retrain.",
"To ensure other parts won't affect the component to be validated, the neural networks are initialized with the same weight matrix generated by a fixed random seed.",
"With these settings, the results can best reflect the true contribution of components.",
"First, we analyze the effectiveness of the dynamical weight consolidation component.",
"Figure 4 plots the consolidation loss (second part in Equation 8b) of model using traditional fixed weight and our proposed dynamical weight consolidation.",
"Several observations can be made as follows.",
"Fixed weight with big value .",
"When the weight ( \u0000 in Equation 8b) is set by a big value (e.g., 10 25 in Figure 4a), the consolidation loss is hard to converge and suffers from fluctuations.",
"Fixed weight with small value .",
"Oppositely, if the weight is initialized with a relatively small value (e.g., 10 2 in Figure 4b), the consolidation loss is too small to be effective.",
"In fact, as can be observed from Figure 4b, under the small weight setting, the consolidation loss even experiences an increase first before it slowly decreases.",
"The increase in consolidation indicates that the neural network tends to sacrifice consolidation loss to lower the overall loss.",
"Furthermore, this phenomenon happens when the new model modifies the important parameters learned by the original model, which are supposed to be kept with the least changes for the continual learning purpose.",
"Dynamical weight consolidation .",
"In contrast to the unstable performance of the traditional method, as shown in Figure 4c, consolidation loss converges fast and stable by using our proposed dynamical weight consolidation.",
"The second set of experiments validate whether Fisher information sampling is indeed beneficial to the overall performance, compared with using",
"To examine how much improvement can be obtained by Fisher sampling alone, we remove the weight consolidation component in this section.",
"Thus the results reported here are outputs of the simple two-layer model by using exemplars during retraining.",
"From another point of view, these results show the amount of information the exemplars carrying from the original model.",
"Figure 5 plots the accuracy of the old task during the retraining process.",
"Although the network is retrained with only a small set of old data, the accuracy is computed over all old data to fully examine the quality of exemplars.",
"Since the classes in synthetic data are fully separable, the accuracy will be 100% eventually.",
"Therefore, the quality of exemplars is demonstrated by the converging speed.",
"A faster converge speed provided by an exemplar set is of great significance in three aspects: Better computational efficiency .",
"obvious benefit indicated by the faster converge speed is, the better computational efficiency since the model requires less retraining time.",
"Less damage to original model .",
"A faster converge speed indicates less damage to the original model.",
"All weight consolidation schemes act like buffers for the old parameter.",
"With these schemes, old parameters will slow down their changes when new tasks come.",
"To best cope with the consolidation schemes, good exemplars should achieve comparable good performance with fewer retraining epochs, since more retraining epochs mean that the new model has modi-fied more parameters from the original network.",
"More information of original dataset .",
"As mentioned above, under the synthetic data is fully separable, the accuracy will be 100% eventually.",
"In this case, a faster speed can be converted to more information, as experiments with more data always require fewer epochs to reach the states of convergence.",
"For example, as shown in Figure 5, much more epochs are needed under sampling rate 0.5% than that of 1%.",
"Figure 5 shows, exemplars generated by Fisher sampling can achieve much faster converge speed than randomly selected exemplars, which proves Fisher sampling alone can contribute contribution effectively to the overall performance.",
"This paper proposes a hyperparameter-free model called CCFI for continuous domain classification.",
"CCFI can record information of old models via exemplars selected by Fisher information sampling, and conduct efficient retraining through dynamical weight consolidation.",
"The comparison against the existing models reveals CCFI is the best performer under various experimental environments, without additional efforts in hyperparameter searching.",
"The general statistics of the 150-class dataset 1 and",
"73-domain dataset are listed below.",
"150-class dataset: balanced dataset with 150 intents that can be grouped into 10 general domains.",
"Each intent has 100 training queries, 20 validation, and 30 testing queries.",
"73-domain dataset: imbalanced dataset with 73 domains.",
"Each domain contains 512 examples on average.",
"However, this dataset is highly imbalanced that the largest domain includes 1,771 examples, while the smallest domain only has 27 examples.",
"In both datasets, we split examples of each class into 90% for training, 5% for validation, and 5% for testing.",
"All classification accuracy results are evaluated on the test set.",
"Specific settings .",
"In our implement of CoNDA, we pick up hyperprameter \u0000 pos = 0 .",
"5 and \u0000 neg = 0 .",
"3 .",
"The fixed-feature method freezes 12 layers of BERT after the initial training.",
"Only the parameters in the new classifier layer are allowed for tuning, which in a way provides the function of continual learning.",
"Fine-tune method can modify parameters in all 12 layers of BERT, which can be viewed as the network with little prevention of catastrophic forgetting.",
"General settings .",
"Adam optimizer is used in all learning processes, and the learning rate is always set to be 0 .",
"00001 .",
"All runs are trained on 4 V100 GPUs with a batch size of 32.",
"Our exam-1 https://github.com/clinc/oos-eval ple code can be found at: https://github.com/ tinghua-code/CCFI A.3",
"To examine the effect of classifier layer number (amount of retrainable parameters), we run experiments under two frameworks.",
"The first framework is the same as the one used in the main experimental part, which consists of a 12-layer BERT feature extractor and a one-layer linear classifier.",
"The second framework keeps the BERT feature extractor unchanged and adds one more layer to the classifier.",
"The results are listed in Table 1 and 2, and several observations can be made as follows.",
"CCFI still remains the best performer.",
"Our proposed CCFI model produces good performance regardless of the number of layers in the classifier.",
"This phenomenon further confirms its effectiveness and stability.",
"CoNDA is the second-best performer in both frameworks.",
"Notably, the retraining performance of CoNDA increases when we increase the number of layers.",
"BERT finetune and feature extraction method become worse when increasing the number of layers.",
"These two baselines are sensitive to the structure of the classifier, which indicates the superficial variations of pre-trained models are not enough for continual learning.",
"One-layer classifier works well enough with BERT.",
"As can be seen from Table 1 and 2, the initial training results of all methods degrade when increasing the number of classifier layers.",
"Therefore, we report the results based on a one-layer linear classifier in the main body of the paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result"
] |
[
"Recent research has demonstrated that goal-oriented dialogue agents trained on large datasets can achieve striking performance when interacting with human users.",
"In real world applications, however, it is important to ensure that the agent performs smoothly interacting with not only regular users but also those malicious ones who would attack the system through interactions in order to achieve goals for their own advantage.",
"In this paper, we develop algorithms to evaluate the robustness of a dialogue agent by carefully designed attacks using adversarial agents.",
"Those attacks are performed in both black-box and white-box settings.",
"Furthermore, we demonstrate that adversarial training using our attacks can significantly improve the robustness of a goal-oriented dialogue system.",
"On a case-study of the negotiation agent developed by (Lewis et al., 2017), our attacks reduced the average advantage of rewards between the attacker and the trained RL-based agent from 2 .",
"68 to 5 .",
"76 on a scale from 10 to 10 for randomized goals.",
"Moreover, with the proposed adversarial training, we are able to improve the robustness of negotiation agents by 1.5 points on average against all our attacks.",
"Crafting an intelligent agent to communicate in the dialogue system using natural languages has been a long-standing problem in AI.",
"It requires designing an agent to understand, plan and generate natural language to achieve different goals such as question-answering, cooperation, negotiation etc (Vinyals and Le, 2015; Li et al., 2017; Serban et al., 2016; Dhingra et al., 2016; Serban et al., 2016).",
"Inspired by recent successes in deep neural networks, (Lewis et al., 2017) has recently developed an end-to-end learning framework to train a recurrent neural network (RNN)-based negotiation agent in goal-oriented dialogue systems.",
"This NN-based technique has been identified as one of the state-of-the-arts and has been applied to several other tasks (Bahdanau et al., 2014; Luong et al., 2015; Rush et al., 2015; Chan et al., 2016).",
"Although NN-based dialogue agents have shown convincing performance on several tasks, it is not clear whether they also work well when facing malicious users or agents.",
"To answer this question, we study how to evaluate the robustness of a goal-oriented dialogue system.",
"For simplicity, we consider a goal-oriented agent A that aims to maximize some score, and define the robustness of A as the worst-case performance under any feasible agent A (cid:48) .",
"We also call A (cid:48) an adversarial agent that tries to attack A since it aims to minimize A 's score.",
"The problem of evaluating the robustness of A can then be solved by designing an adversarial agent to attack A .",
"For instance, considering a negotiation agent that can decide when to make a deal, we say the agent is not robust if an adversarial agent can fool the target agent to make a deal with significant lower scores.",
"Ideally, before deploying an agent into real systems, we need to ensure it performs smoothly under strong adversarial attacks.",
"The concept of adversarial agent is related to recent studies on adversarial examples for image classifiersit has been shown that a carefully designed small perturbation can easily make neural networks mis-classify (Goodfellow et al., 2014; Szegedy et al., 2013; Moosavi Dezfooli et al., 2016; Carlini and Wagner, 2017; Cheng et al., 2019), and several recent works has extended these attacks to natural language processing models such as sentiment analysis (Gao et al., 2018; Yang et al., 2018) and machine translation (Ebrahimi et al., 2018; Cheng et al., 2018).",
"However, all of the previous work consider attacking a static model, where except input im-age/sentence there is no interaction between the attacker and the target model.",
"Instead, we investigate a much more challenging problem, where there can be many turns of interactions between adversarial and target agents.",
"This leads to several difficulties including 1) How to lead the target agent to a bad state and 2) how to force the target agent to make a wrong decision.",
"Therefore, previous methods for attacking static models cannot be directly applied.",
"In this paper, we tackle the aforementioned challenges by proposing several novel ways to design an adversarial agent to evaluate the robustness of goal-oriented dialogue systems.",
"We highlight our major contributions as follows: We propose a framework to generate adversarial agents in both black-box and white-box settings.",
"To the best of our knowledge, this is the first work on crafting adversarial agents instead of adversarial examples in an interactive dialogue system.",
"We conduct a series of studies on the negotiation agent proposed in (Lewis et al., 2017).",
"We demonstrate that the proposed strategies can successfully attack existing negotiation agents to significantly reduce their average score.",
"For instance, our attacks can reduce the average advantage of the RL-based negotiation agent from 2 .",
"68 to 5 .",
"76 on random problems with the total value of 10 .",
"We also show that through the proposed iterative adversarial training procedure, we could significantly improve the robustness of a goal-oriented agent against various attacks.",
"Goal-oriented dialogue systems aim at building a conversation model that is capable of accomplishing tasks through the interactions with human using natural language (Li et al., 2017; Eric and Manning, 2017; Wen et al., 2016; Wei et al., 2018; Bordes et al., 2016).",
"Traditional approaches to learn a goal-oriented intelligent agent relies heavily on dialogue states annotated in the training data (Wen et al., 2016; Henderson et al., 2014).",
"The use of state annotations allows a cleaner separation of the reasoning and natural language aspects of dialogues.",
"However, it is very expensive to annotate every state in a large amount of training data.",
"(Bordes et al., 2016) explores end-to-end goal orientated dialogue with a supervised model.",
"And (He et al., 2017) uses task-specific rules to combine the task input and dialogue history into a more structured state representation.",
"Recently, reinforcement learning has been widely used in dialogue systems to increase the agent versatility (Mordatch and Abbeel, 2017) and improve the agent's performance in goal-oriented tasks such as cooperative bot-bot dialogues (Das et al., 2017) and negotiation tasks (Lewis et al., 2017).",
"Algorithms have been proposed to craft adversarial sentences in NLP applications.",
"(Papernot et al., 2016) uses Fast Gradient Sign method to generate adversarial example on RNN/LSTM based model.",
"(Li et al., 2016) learns the importance of words by deleting them in sentiment analysis task and then use reinforcement learning to locate such words.",
"(Samanta and Mehta, 2017) and (Liang et al., 2017) generate adversarial sequences by inserting or replacing existing words with typos and synonyms.",
"(Gao et al., 2018) aims to attack sentiment classification models in a black-box setting.",
"It develops some scoring functions to find the most important words to modify.",
"(Jia and Liang, 2017) aims to fool the SQuAD reading comprehension system by adding crafted sentences.",
"(Yang et al., 2018) proposes a greedy algorithm to swap the word/character and uses a Gumbel softmax function to reduce the computation.",
"(Ebrahimi et al., 2018) aims to generate adversarial examples on character CNN model in machine translation problem by using Jacobian matrix to determine which word/character should be replaced or deleted.",
"(Zhao et al., 2017) generated natural adversarial example using Generative Adversarial Networks (GANs).",
"(Cheng et al., 2018) proposed a framework to conduct non-overlapping and targeted keyword attack on seq2seq model.",
"All the above-mentioned work focus on the static setting, i.e., the input does not depend on the model's output.",
"However, in our work, one agent's input depends on the other agent's output, which makes the input undecidable in the beginning.",
"Therefore, an adversarial sentence or example is not enough to conduct attack in dialogue systems.",
"Instead, for the first time, we propose novel ways to construct a adversarial agent, which can bait the target agent to step to a wrong state and make a bad decision.",
"Many defense algorithms have been proposed recently to enhance the robustness of classification models.",
"Among them, adversarial training (Madry et al., 2018; Goodfellow et al., 2014) has become one of the most successful methods, which uses both clean and adversarial examples to train a robust model.",
"(Wong and Kolter, 2018) proposed another kind of adversarial training to improve the verification lower bound of neural networks; (Liu and Hsieh, 2019) combines the idea of generative adversarial network (GAN) and adversarial training to further boost the robustness on test images.",
"Another promising way to enhance robustness is by adding randomness to the model.",
"(Liu et al., 2018) shows adding randomness to both input and intermediate layers of neural networks can improve robustness; (Liu et al., 2019; Ye and Zhu, 2018) show that combining Bayesian neural network (with randomized weights) and adversarial training can achieve state-of-the-art adversarial error under attacks.",
"However, all the existing defense methods only work for static models (usu-ally for classification tasks).",
"In this paper, we propose an adversarial training algorithm for an agent using RL with an adversarial agent.",
"This is to our knowledge the first algorithm for improving the robustness of an agent.",
"We use the negotiation agent developed in (Lewis et al., 2017) as the running example in this paper.",
"Note that our algorithm can be generalized to other goal-oriented dialogue systems by designing a different scoring function according to the task.",
"In a competitive negotiation dialogue setting, two agents are negotiating with each other over a set of items.",
"We adopt the same setting as (Lewis et al., 2017), in which case items can be categorized into either a ball, a hat or a book.",
"Each agent is given the goal of the conversation (denoted by g ), which contains the initial values and the quantities of each of the three items.",
"Agents then negotiate to maximize the total value of their possessed items.",
"Agents are allowed to negotiate up to a maximum of 10 turns.",
"Scores will be granted Input Human 3x book value 2 3x hat value 1 1x ball value 1 Agent 3x book value 1 3x hat value 2 1x ball value 1 Human I'd like the books and the hats.",
"to agents based on the total value of the items if they reach an agreement.",
"If they choose not to agree, 0 score will be granted to both agents.",
"A competitive negotiation dialogue example played by human and agent could be found in Table 1.",
"We assess the robustness of a trained end-to-end negotiation agent used in (Lewis et al., 2017).",
"In the negotiation chatbot setting, agents first chat using natural language and then make a selection based on what they have chatted with.",
"We refer to the first phase as negotiation phase and the second phase as decision phase .",
"In the negotiation phase, conversation response at time t , x t is generated word by word based on chat history x 0",
"..t 1 and the goal of the conversation g .",
"The conversation model is controlled by a speaking module and tokens are randomly sampled from probability distribution p .",
"This process continues recursively until an end-of-sentence token (cid:104) EOS (cid:105) or selection token (cid:104) selection (cid:105) token is generated.",
"When (cid:104) EOS (cid:105) is encountered, the turn terminates and the conversation is handled to another agent.",
"When (cid:104) selection (cid:105) is encountered, the negotiation phase terminates and the negotiation will reach the decision phase.",
"In the decision phase, both agents will output a decision o based on a decision module probability distribution p (cid:48) .",
"Agents' decisions will be based on conversation history x 0 ...T up to the current time step T and the goal of the conversation g .",
"Here O is a set of all legitimate selections, which is defined to be a space of where each selection must be greater or equal than 0 and the sum of selections for the same item must be equal to its original quantity.",
"Since we only have a few items, it is possible to enumerate all the possibilities to get the set O .",
"Agents will then collect rewards (i.e. scores) from the environment (which will be 0 if they output conflicted decisions, e.g. the total number of items are different from the initial amount).",
"It is important to keep the agent producing sentences that are correct both grammatically and semantically and keeping them competitive at the same time.",
"Therefore, a common strategy is to train agents using supervised learning to learn natural language and to use reinforcement learning to optimize models' performance using on goal-oriented learning.",
"We measure two statistics score and agreement .",
"score is the average score for each agent (0-10).",
"agreement is the percentage of dialogues where both agents agreed on the same decision.",
"To measure the extent of success of our adversarial agent, we use advantage which is easy to compute directly from adversarial agent score minus target agent score, i.e. S adv S ori .",
"We first build our adversarial agent in black-box setting.",
"Black-box setting in goal-oriented dialogue system is defined where the target agent is unknown to the attacker, but it is possible to make queries to obtain the final decision made by the target agent.",
"To be noted, our aim is to test the robustness of the target agent.",
"Therefore, in the decision phase we let adversarial agent chooses the complementary of target agent's choice, so those two agents will always reach agreement.",
"The adversarial agent thus only has the speaking module and there is no decision network needed.",
"In this section we proposed two adversarial agents in the black-box setting.",
"Inspired by the procedure of goal-based reinforcement learning, we modified the reward function of our adversarial agent with the advantage instead of the score he got:",
"where S adv and S ori are adversarial agent score and target agent score respectively.",
"After a complete dialogue has been generated, we update adversarial agent's parameters based on the outcome of the negotiation.",
"To learn the adversarial agent's speaking network by reinforcement learning, we denote the subset of tokens generated by the adversarial agent as X adv .",
"In the completed dialogue, is the discount factor that rewards actions at the end of the dialogue more strongly, and is a running average of completed dialogue rewards so far.",
"We define the future reward R for an action x t X adv as follows: R ( x t ) = (cid:88) x t X adv T t ( r adv ) .",
"Then by a standard policy gradient algorithm, we could train our adversarial agent.",
"Note that this attack doesn't require the knowledge on the target agent's structure/weights, and the experimental results demonstrate significant attack performance over regular agents.",
"Transfer attack is a popular idea for attacking black-box models (Papernot et al., 2017).",
"In dialogue systems, we can also consider the following transfer process: a sentence that leads to low r adv in one dialogue might also lead to similar results in another dialogue.",
"To implement this idea, we first collect a list of last sentences spoken by the adversarial agent from dialogues with high reward, denoted by L .",
"In the conversations, we let our adversarial agent and the target agent negotiate n turns using the regular speaking module, and then plug in one sentence in L at the ( n + 1) -th turn.",
"Our experimental results show that this transfer attack does not work well in practice.",
"In the white-box setting, we assume that the attacker can access every part of the target agent, including the weights of both speaking and decision models, and the decision output in every dialogue.",
"Similar to the black-box attacks, we let the adversarial agent choose the complementary of target agent's choices to ensure 100% agreement.",
"By exploiting the knowledge of the target agent's model, white-box attacks can achieve much higher advantage than black-box attacks.",
"To begin with, we consider a simplified strategy where we first let our adversarial agent and the target agent negotiate n turns using regular speaking module.",
"For the ( n + 1) -th turn, we propose the following two ways to modify the output of regular speaking module to maximize the rewards of adversarial agent.",
"The first strategy is that the adversarial agent produces a sentence that forces the target agent to say (cid:104) selection (cid:105) .",
"The conversation will then enter the decision phase.",
"At the same time, the sentence produced by the adversarial agent should guide the target agent to make a bad selection that would be in favor of the adversarial agent.",
"We call this method reactive attack .",
"We formulate this strategy as an optimization problem.",
"Let x = x t n ...T 1 be the output sentence generated by adversarial agent in the speaking model after n -th turn.",
"Specifically, we define x 0 ...T 1 as all the tokens in the dialogue history before (cid:104) selection (cid:105) .",
"Z r ( x 0 ...T 1 ) indicates the logit layer outputs for predicting x T based on chat history x 0 ...T 1 in the speaking model.",
"Z o ( x 0 ...T ) indicates the logit layer outputs on conversation history x 0 ...T in the decision model.",
"Because we have a constraint to force the target agent to say the end-of-dialog token (cid:104) selection (cid:105) , we could format this constraint as [ Z r ( x 0 ...T 1 )] k sel max i (cid:54) = k sel [ Z r ( x 0 ...T 1 )] i 0 (5) where k sel is the corresponding index of end-of-dialog token (cid:104) selection (cid:105) .",
"At the same time, the score of output o should be in favor of our adversarial agent.",
"Assume the original decision output is o (cid:48) , L ( x ) = max { [ Z o ( x 0 ...T )] o (cid:48) max o O [ Z o ( x 0 ...T )] o , } (6) where O is the set of outputs that score of adversarial agent is greater than target agent i.e. O = { o O | S adv ( o ) > S ori ( o ) } , and 0 denotes the confidence margin parameter.",
"Note that x is a sub-sequence in x 0 ...T , so the right hand side of (6) is a function of x .",
"Combining these two equations together, we can get our final objective function: min x L ( x ) (7) s.t. [ Z r ( x 0 ...T 1 )] k sel max i (cid:54) = k sel [ Z r ( x 0 ...T 1 )] i 0 Eq (7) is a discrete optimization problem since x is the sentence produced by adversarial agent.",
"In this paper, we use a modified version of the greedy algorithm to optimize (7).",
"Although the original algorithm proposed in (Yang et al., 2018) only considered the unconstrained discrete problem, we show that the following slightly modified version performs well for solving (7).",
"At each iteration, we try to replace each word in x by the special token (cid:104) P AD (cid:105) .",
"A word that achieves minimal loss after swapping with (cid:104) P AD (cid:105) is then selected as the word to be replaced.",
"Then we try to replace the selected word with each word in the vocabulary.",
"For all the trials that satisfy the constraint, we choose the one with minimal loss and conduct the actual change.",
"We run this procedure iteratively to minimize (7).",
"In the experiments, we only replace two words in x to ensure the fluency and correctness of the adversarial sentences.",
"The other attack strategy is to produce a sentence to guide the target agent to lower its demand in the reply instead of making target agent say end-of-dialog token.",
"And after the reply from target agent, the adversarial agent speaks the end-of-dialogue token to enter the decision phase.",
"Similar to the reactive attack, adversarial agent's score should be greater than target agent's score in the decision phase.",
"Clearly, this strategy is more challenging than the previous one because there is an intermediate sentence spoken by the target agent before end-of-dialogue.",
"We call this preemptive attack.",
"Let x = x t n ...t nT be the output sentence generated by adversarial agent in the speaking model after turn n , where t n is the first word and t n T is the last word of the sentence.",
"Similarly, we could formally turn the intuition into optimization problem as follows: L ( x ) = max { [ Z o ( x 0 ...T )] o (cid:48) max o O [ Z o ( x 0 ...T )] o , } (8) Since we do not need to force target agent to say end-of-dialogue, the problem becomes an unconstrained discrete optimization problem.",
"We then Algorithm 1 Arbitrary turn attack algorithm Input: Target agent B, Input goal g Output: Dialogue x 0 ...T , Agent score S adv and S ori while (cid:104) selection (cid:105) is not generated do Set the loss L ( ) to be (7) Optimize the Loss L ( ) if L ( ) < 0 then Add the output into the dialogue else Set the loss L ( ) in to be (8) Optimize the Loss L ( ) if L ( ) < 0 then Add the output into the dialogue else if Transfer Attack then Randomly add a sentence in L (mali-cious sentences) into the dialogue.",
"directly apply the unconstrained version of greedy algorithm (Yang et al., 2018) to solve it.",
"While we could let our adversarial agent and the target agent negotiate n turns, it is still unknown which n should be chosen to get the best performance.",
"In other words, we aim to not only know what to say but also when to say to fool the target agent.",
"We propose two strategies to force target agent to make bad decisions at arbitrary turn.",
"The details are presented in Algorithm 1.",
"When it is the turn for adversarial agent to speak, we first try to apply reactive and preemptive attacks.",
"If both attacks couldn't make the loss L ( ) less than 0, there are two strategies: 1) just output the sentence generated by the regular speaking module (delayed at-tack), and 2) conduct transfer attack.",
"The comparisons can be found in the experiments.",
"Adversarial training is a popular method to improve the robustness of machine learning models (Miyato et al., 2016; Madry et al., 2018).",
"In this section, we use the agents designed in the previous sections to improve the robustness of the target agent.",
"In standard adversarial training for neural network models (Goodfellow et al., 2014; Jia and Liang, 2017), adversarial examples (images or sentences) generated by an attack are added to the training procedure to refine the model.",
"Since our setting is interactive and there is no fixed data used in selfplay, we should conduct training with adversarial agents instead of adversarial examples.",
"Moreover, as pointed out by (Jia and Liang, 2017), training on the examples generated by a single attack will lead to over-fitting to a particular attack, so we should do adversarial training iteratively.",
"Taking the black-box RL agent as an example, we consider the following min-max formulation: min ori { max adv S adv S ori } , (9) where ori is the weights for the target agent and adv is the weights for the adversarial black-box agent.",
"We solve (9) by the following alternating minimization procedure.",
"At each iteration, we first update the target agent ( ori ) using the standard policy gradient algorithm, and then use our RL algorithm in Section 4.1 to update adversarial agent to counter the target model.",
"We iteratively conduct these updates until convergence.",
"The experiments show that the adversarial training procedure can improve the robustness not only under RL attack but also under other white-box attacks.",
"We perform extensive experiments on evaluating the robustness of the negotiation agents developed in (Lewis et al., 2017).",
"Furthermore, we show that the robustness of negotiation agents can be significantly improved using the proposed adversarial training procedure.",
"Our codes are publicly available at https://github.com/cmhcbb/ Robustness-of-Dialogue-systems .",
"We use the code released by the authors (Lewis et al., 2017) and follow their instructions to get the target end-to-end negotiation agents.",
"More specifically, we first train the model on 5808 dialogues, based on 2236 unique scenarios in supervised way to imitate the actions of human users.",
"We call this model supervised model (SV agent).",
"Then we use reinforcement learning to conduct goal-oriented training in order to maximize the agent' reward.",
"The second model is called the reinforcement learning model (RL agent).",
"As a result, when doing selfplay between RL agent and SV agent, we could get RL agent with 5.86 perplexity, 89.57% agreement and 7.23 average score, while SV agent achieves 5.47 perplexity and 4.55 average score.",
"These numbers are similar to the numbers reported in (Lewis et al., 2017).",
"To evaluate the robustness of these agents, we conduct all the proposed attacks on both supervised model (SV agent) and reinforcement learning model (RL agent).",
"The successfulness of an attack is measured by average score advantage and positive advantage rate (PAR).",
"Average score advantage is defined by averaged adversarial agent's score minus average target agent's score.",
"The value is in the region of [ 10 , 10] since the total values are controlled to be 10 for both sides, and a larger advantage indicates a more successful attack.",
"Also, we define positive advantage rate (PAR) as the ratio of dialogues that the adversarial agent gets a higher score than the target agent.",
"We will see that most attacks developed in this paper will improve both average score advantage and PAR.",
"Note that this is the first work on attacking a goal-oriented dialogue agent so there is no previous method that could be included in the comparisons.",
"As introduced in Section 4, we have two black-box attacks: reinforcement learning attack (RL attack) and Transfer attack.",
"RL Attack.",
"In the reinforcement learning attack, we use a learning rate of 0.1, clip gradients above 1.0, and set the discount factor = 0 .",
"95 in (4).",
"We train the adversarial agent for 4 epochs on all scenarios.",
"From Table 2, we observe that with 100% agreement rate, our adversarial agent could gain 2.32 score advantage against the RL agent and 4.25 advantage against the SV agent.",
"Also, our agent achieves a relatively high positive advantage rate at 84.45% and 69.35% respectively.",
"We show some adversarial dialogues played by adversarial agent and target agent in Table 3.",
"It shows that RL agent is able to identify the weak point of target agent by saying take book you get rest, which could easily let the agent accept the deal and make a bad selection that is inconsistent with the context of dialogue.",
"Transfer attack.",
"In transfer attack, we first let our adversarial agent speak the sentence generated by the speaking model with target agent for 3 turns.",
"If the end-of-dialog token has never been mentioned, in the 4th turn, the adversarial agent speaks the sentence generated by our RL agent.",
"The detailed results are shown in Table 2.",
"We observe that the transfer attack is not successfulonly -0.13 and -1.189 score advantage are achieved.",
"We found that transferring sentences between dialogues is not successful because the item values and conversation histories are quite different between dialogues.",
"Force target agent to select at a fixed turn.",
"There are two types of algorithms (reactive attack and preemptive attack) introduced in Section 5.1.",
"The detailed results are shown in Table 2.",
"We observe that the reactive attack could achieve better results than black-box method with 5.40 score advantage against SV agent and 4.98 score advantage against RL agent.",
"On the other hand, preemptive attack is not that successfulit gets 2.81 advantage against SV agent and 0.77 score advantage against RL agent.",
"Furthermore, we have included some adversarial dialogues played by white-box adversarial agent and target agent in Table 4.",
"From these examples, we could see that white-box adversarial agent could generate the adversarial sentences, slightly unnatural however still readable, that could fool the target agent to make terrible decisions.",
"To determine when should we begin the attack, we design combinations of reactive attack, preemptive attack and transfer attack or delayed attack in Section 5.2.",
"Here, we conduct experiments to validate the effectiveness of these two attack combinations.",
"From Table 2, the combinations achieve better results than all the previous attacks.",
"The best result is achieved by the combination of reactive attack, preemptive attack and delayed attack vs SV agent vs RL agent Model PAR% Score(advantage) Agreement% PAR% Score(advantage) Agreement% RL model(w/o attack) 75.79 7.23 vs 4.55 (2.68) 89.57 44.70 5.05 vs 5.00 (0.05) 76.36 Transfer attack 44.43 6.41 vs 6.54 (-0.13) 100 36.10 5.65 vs 6.84 (-1.19) 100 RL attack 84.45 8.28 vs 4.03 (4.25) 100 69.35 7.11 vs 4.79 (2.32) 100 Reactive attack 87.00 8.83 vs 3.43 (5.40) 100 90.23 8.72 vs 3.77 (4.95) 100 Preemptive attack 71.86 7.76 vs 4.95 (2.81) 100 69.23 6.78 vs 6.01 (0.77) 100 RA+PA+DA 84.33 8.79 vs 2.96 (5.83) 100 86.93 8.73 vs 2.95 (5.78) 100 RA+PA+TA 83.12 8.67 vs 3.05 (5.62) 100 89.74 8.62 vs 2.92 (5.70) 100 Table 2: Negotiation task evaluation with different adversarial agent on 2000 randomly generated scenarios, against the supervised model and reinforcement learning model.",
"(RA+PA+DA), which gets 5.83 advantage against SV agent and 5.78 score advantage against RL agent, with very high positive advantage rates at 84.33% and 86.93% respectively.",
"We have included some adversarial dialogues played by this adversarial agent and the target agent in Table",
"5. We observe that with the delayed attack, the adversarial agent can decide when to attack , thus achieves much better performance than attacking at a fixed turn.",
"Using the algorithm proposed in Section 6, we conduct adversarial training using the black-box RL attack model.",
"The results are shown in Table",
"6. First, we observe that the adversarial trained model achieves much better performance against black-box RL attack; the advantage of RL attack drops from 2 .",
"32 to 1 .",
"8 .",
"Moreover, the model achieves consistently better performance against other white-box attacks.",
"For instance, the advantage of the strongest RA+PA+DA attack is reduced from 5.78 to 3.98.",
"RL agents are more robust than SV agents.",
"From Table 2, we could see that all the attack methods perform better when facing SV agents than RL agents.",
"It is because that SV agents only learn to mimic human's action and are trained only on human data.",
"The importance of arbitrary turns.",
"In reactive attack and preemptive attack, we begin our attack at the n -th turn and we set n = 2 in the experiments.",
"Here we show the results with different n in Table",
"7. We observe that the performance of white-box attacks are quite consistent with different choices of n .",
"This probably indicates that there the best n varies for different cases.",
"Therefore, if we could change the n from case to case adaptively, which is done by delayed attack, we could see a performance boost.",
"robustness.",
"We then try to investigate the robustness of the adversarial trained model.",
"We found that in the original model, it is easy for an attacker to find a sentence to quickly end the dialogue.",
"However, after adversarial training, it becomes much harder to find such sentences.",
"Moreover, although we only conduct adversarial training on black-box RL model, the adversarial trained model still achieves better performance against other white-box attacks.",
"This indicates that the adversarial trained model could probably detect the slight unnaturalness of those sentences and thus have a better reading comprehension ability.",
"In this paper, we develop adversarial agents to evaluate the robustness of a goal-oriented dialogue system.",
"Our experimental results show that the current NN-based models are not robust against our adversarial agents.",
"Furthermore, by iterative adversarial training using our black-box RL agent, we can significantly improve the robustness of the dialogue system."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"method",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result"
] |
[
"We introduce a curriculum learning approach to adapt generic neural machine translation models to a specific domain.",
"Samples are grouped by their similarities to the domain of interest and each group is fed to the training algorithm with a particular schedule.",
"This approach is simple to implement on top of any neural framework or architecture, and consistently outperforms both unadapted and adapted baselines in experiments with two distinct domains and two language pairs.",
"Neural machine translation (NMT) performance often drops when training and test domains do not match and when in-domain training data is scarce (Koehn and Knowles, 2017).",
"Tailoring the NMT system to each domain could improve performance, but unfortunately high-quality parallel data does not exist for all domains.",
"Domain adaptation techniques address this problem by exploiting diverse data sources to improve in-domain translation, including general domain data that does not match the domain of interest, and unlabeled domain data whose domain is unknown (e.g. webcrawl like Paracrawl).",
"One approach to exploit unlabeled-domain bitext is to apply data selection techniques (Moore and Lewis, 2010; Axelrod et al., 2011; Duh et al., 2013) to find bitext that are similar to in-domain data.",
"This selected data can additionally be combined with in-domain bitext and trained in a continued training framework, as shown in Figure",
"1. Continued training or fine-tuning (Luong et al., 2015; Freitag and Al-Onaizan, 2016; Chu et al., 2017) is an adaptation technique where a model is first trained on the large general domain data, then used as initialization of a new model which is further trained on in-domain bitext.",
"In our Generic Model Domain Specific Model General Domain Data Unlabeled-Domain Data In-Domain Data Continued Training Initialization Figure 1: Workflow of our domain adaptation system.",
"framework, the selected samples are concatenated with in-domain data, then used for continued training.",
"This effectively increases the in-domain training size with pseudo in-domain samples, and is helpful in continued training (Koehn et al., 2018).",
"A challenge with employing data selection in continued training is that there exists no clear-cut way to define whether a sample is sufficiently similar to in-domain data to be included.",
"In practice, one has to define a threshold based on similarity scores, and even so the continued training algorithm may be faced with samples of diverse similarities.",
"We introduce a new domain adaptation technique that addresses this challenge.",
"Inspired by curriculum learning (Bengio et al., 2009), we use the similarity scores given by data selection to rearrange the order of training samples, such that more similar examples are seen earlier and more frequently during training .",
"To the best of our knowledge, this is the first work applying curriculum learning to domain adaptation.",
"We demonstrate the effectiveness of our approach on TED Talks and patent abstracts for German-English and Russian-English pairs, using two distinct data selection methods, Moore-Lewis method (Moore and Lewis, 2010) and cynical data selection (Axelrod, 2017).",
"Results show that our approach consistently outperforms standard continued training, with up to 3.22 BLEU improvement.",
"Our S 4 error analysis (Irvine et al., 2013) reveal that this approach reduces a reasonable number of SENSE and SCORE errors.",
"Weinshall and Cohen (2018) provide guidelines for curriculum learning: A practical curriculum learning method should address two main questions: how to rank the training examples, and how to modify the sampling procedure based on this ranking.",
"For domain adaptation we choose to estimate the difficulty of a training sample based on its distance to the in-domain data, which can be quantified by existing data selection methods (Section 2.1).",
"For the sampling procedure, we adopt a probabilistic curriculum training (CL) strategy that takes advantage of the spirit of curriculum learning in a nondeterministic fashion without discarding the good practice of original standard training policy, like bucketing and mini-batching.",
"We adopt similarity metrics from prior work on data selection to score examples for curriculum learning.",
"Let I be an in-domain corpus, and N be a unlabeled-domain data set.",
"Data selection models rank sentences in N according to a domain similarity measure with respect to I , and choose top n samples from N by a cut-off threshold for further training purpose.",
"We examine two data selection methods, Moore-Lewis method (Moore and Lewis, 2010) and cynical data selection (Axelrod, 2017).",
"where HI ( s ) is the per-word cross-entropy of s according to a language model trained on I , and HN ( s ) is the per-word cross-entropy of s according to a language model trained on a random sample of N with roughly the same size as I .",
"A lower cross-entropy difference indicates that s is more like the in-domain data and less like the unlabeled-domain data.",
"Cynical Data Selection Iteratively select sentence s from N to construct a training corpus that would approximately model I .",
"At each iteration, each sentence is scored by the expected cross-entropy change from adding it to the already selected subset of N .",
"The selected sentence is the one which most decreases H n , the cross-entropy between previously selected n -sentence corpus and I .",
"We identify two general types of curriculum learning strategy.",
"The deterministic curriculum (c.f. Kocmi and Bojar (2017)) trains on a fixed order of samples based on their scores (e.g. easy-to-hard or more similar to less).",
"While simple to motivate, this may not always perform well because neural methods benefit from randomization in the minibatches and multiple epochs.",
"In contrast, the probabilistic curriculum (Bengio et al., 2009) works by dividing the training procedure into distinct phases.",
"Each phase creates a random sample from the entire pool of data, but earlier phases sample the easier or more similar sentence with higher",
"probability..",
"Since each phase can be viewed as creating a new training dataset, all the well-tested tricks of the trade for neural network optimization can be employed.",
"In this paper, we use the same probabilistic curriculum strategy and code base 1 as Zhang et al. (2018).",
"The main difference here is the application to domain adaptation.",
"The proposed strategy is summarized as follows: Sentences are first ranked by similarity scores and then distributed evenly into shards, such that each shard contains samples with similar similarity criteria values.",
"The training process is segmented into consecutive phases , where only a subset of shards are available for training.",
"During the first phase, only the easiest shard is presented.",
"When moving to the next phase, the training set will be increased by adding the second easiest shard into it, and so on.",
"Easy shards are those that are more similar to the in-domain data, as quantified by either Moore-Lewis or Cynical Data Selection.",
"The presentation order of samples is not deterministic.",
"(1) Shards within one curriculum phase are shuffled, so they are not necessarily visited by the order of similarity level during this phase.",
"(2) Samples within one shard are bucketed by length and batches are drawn randomly from buckets.",
"We evaluate on four domain adaptation tasks.",
"The code base is provided to ensure reproducibility.",
"2 3.1 Data and Setup General Domain Data We have two general domain datasets, Russian-English (ru) and German-English (de).",
"Both are a concatenation of OpenSubtitles2018 (Lison and Tiedemann, 2016) and WMT 2017 (Bojar et al., 2017), which contains data from several domains, e.g. parliamentary proceedings (Europarl, UN Parallel Corpus), political/economic news (news commentary, Rapid corpus), and web-crawled parallel corpus (Common Crawl, Yandex, Wikipedia titles).",
"We performed sentence length filtering (up to 80 words) after tokenization, ending up with 28 million sentence pairs for German and 51 million sentence pairs for Russian.",
"methods on two distinct domains per language pair: TED talks: data-split from Duh (2018).",
"Patents: from the World International Property Organization COPPA-V2 dataset (Junczys-Dowmunt et al., 2016).",
"We randomly sample 15k parallel sentences from the original corpora as our in-domain bitext.",
"3 We also have around 2k sentences of development and test data for TED and 3k for patent.",
"Unlabeled-domain Data For additional unlabeled-domain data, we use web-crawled bitext from the Paracrawl project.",
"4 We filter the data using the Zipporah cleaning tool (Xu and Koehn, 2017), with a threshold score of",
"1. After filtering, we have around 13.6 million Paracrawl sentences available for German-English and 3.7 million Paracrawl sentences available for Russian-English.",
"Using different data selection methods, we include up to the 4096k and 2048k sentence-pairs for our German and Russian experiments, respectively.",
"models (Sennrich et al., 2016) from general domain data.",
"The BPE models are trained separately for each language, and the number of BPE symbols is set to 30k.",
"We then apply the BPE models to in-domain and Paracrawl data, so that the parameters of the generic model can be applied as an initialization for continued training.",
"Once we have a converged generic NMT model, which is very expensive to train, we can adapt it to different domains, without building up a new vocabulary and retraining the model.",
"NMT Setup Our NMT models are developed in Sockeye 5 (Hieber et al., 2017).",
"The generic model and continued training model are trained with the same hyperparameters.",
"We use the seq2seq attention architecture (Bahdanau et al., 2015) with 2 LSTM layers for both encoder and decoder, and 512 hidden nodes in each layer.",
"The word embedding size is also set to 512.",
"Our models apply Adam (Kingma and Ba, 2014) as the optimizer, with an initial learning rate 0.0003.",
"The learning rate is multiplied by 0.7 whenever validation perplexity does not surpass the previous best in 8 checkpoints.",
"6 We use minibatches of 4096 words.",
"Training stops when the perplexity on the development set has not improved for 20 checkpoints (1000 updates/batches per check-point).",
"Domain Similarity Scoring Setup To get similarity scores, we build 5-gram language models on the source side 7 with modified Kneser-Ney smoothing using KenLM (Heafield, 2011).",
"Curriculum Learning Setup The number of batches in each curriculum phase is set to 1000.",
"We split the training data into 40 shards 8 , with all the 15k in-domain data in the first shard, and Paracrawl data split into the remaining 39 shards.",
"Our goal is to empirically test whether the proposed curriculum learning method improves translation quality in the continued training setup of 5",
"Figure",
"1. We compare two approaches to continued training: (1) the standard approach reads batches of in-domain and selected Paracrawl in random order; (2) the proposed curriculum learning approach reads these batches according to a schedule.",
"We run the comparison with two data selection methods, leading to four systems: std ML : standard continued training with Moore-Lewis scores CL ML : curriculum learning approach to continued training with Moore-Lewis scores std CDS : standard continued training with scores from Cynical Data Selection CL CDS : curriculum learning approach to continued training with scores from Cynical Data Selection For reference, we show results of the generic model ( GEN ), the model trained from scratch with in-domain data ( IN ), the model continued trained on in-domain data only ( IN CT ), and a standard continued training model using a random subset (rather than ML or CDS scores) of the concatenated in-domain and Paracrawl data ( std rand ).",
"Table 1 summarizes the key results, where we continue train on 15k in-domain samples and 4096k Paracrawl samples (for de) or 2048k Paracrawl samples (for ru):",
"The baseline BLEU scores confirm the need",
"for domain adaptation.",
"Using only the 15k in-domain samples alone (IN) is not suffi-cient to train a strong domain specific model, yielding BLEU scores as low as 2.53 on TED(de) and 1.76 on TED(ru).",
"The model trained with a large amount of general domain data (GEN) is a stronger baseline, with BLEU scores of 34.59 and 23.40.",
"Standard continued training is not robust to samples that are noisy and less similar to in-domain.",
"As expected, continued training on in-domain data (INCT) improves BLEU significantly, by up to 18.74 BLEU on patent(de).",
"However, when adding Paracrawl data, the standard continued training strategy (std rand, std ML, std CDS) consistently performs worse than IN CT.",
"Curriculum learning consistently improves BLEU score.",
"Ranking examples using Moore-Lewis (CL ML) and Cynical Data Selection (CL CDS) improve BLEU over their baselines (std ML and std CDS) by up to 3.22 BLEU points.",
"As an additional experiment, we report results on different amounts of Paracrawl data.",
"Figure 2 shows how the curriculum uses increasing amounts of Paracrawl better than standard continued training.",
"Standard continued training model hurts BLEU when too much Paracrawl data is added: for TED(de), there's a 1.94 BLEU drop when increasing CDS data from 64k to 4096k, and for patent(de), the decrease is 2.43 BLEU.",
"By contrast, the curriculum learning models achieve a BLEU score that is as good or better as the initial model, even after being trained on the most dissimilar examples.",
"This trend is clearest on the patent(ru) CL ML model, where the BLEU score consistently rises from 32.41 to 34.18.",
"The method used to score domain relevance has a different impact on the TED domain (top plots) and on the patent domain (bottom plots).",
"On the patent domain, which is more distant from Paracrawl, CDS significantly outperforms ML.",
"Replacing ML with CDS improve BLEU from 2.18 to 4.05 BLEU points for standard models and 2.20 to 4.25 BLEU points for curriculum learning models.",
"Interestingly, for patents, the Moore-Lewis method does not beat the random selection, even when curriculum learning is applied.",
"For example, at 64k selected sentences for patent(de), std rand gets 4.26 higher BLEU scores than CL ML.",
"By contrast on the TED domain, which is closer to Paracrawl, the Moore-Lewis method slightly outperforms cynical data selection.",
"Due to these differences, we suggest trying different data selection methods with curriculum learning on new tasks; a potential direction for future work may be a curriculum that considers multiple similarity scores jointly.",
"We compare our approach to other curriculum strategies.",
"CL reverse reverses the presenting order of the shards, so that shards containing less similar examples will be visited first, CL scrambled is a model that adopts the same training schedule as CL , but no data selection method and ranking is involved here Paracrawl data are evenly split and randomly assigned to 64 128 256 512 1024 2048 4096 Number of Paracrawl Sentences (*1000) 36.0 36.5 37.0 37.5 38.0 38.5 39.0 BLEU std CL CL_reverse CL_scrambled CL_noshuffle Figure 3: Comparison of various curriculum strategies on German-English TED corpora, where Moore-Lewis method is applied.",
"Results from Figure 3 show that CL outperforms CL reverse and CL noshuffle for 5 out of 7 cases and outperforms CL scrambled in 6 out of 7 cases.",
"This suggests that it is beneficial to train on examples that are closest to in-domain first and to use a probabilistic curriculum.",
"Analyzing the detailed difference between CL and CL reverse would be interesting future work.",
"One potential hypothesis why CL might help is that it first trains on a low-entropy subset of the data before moving on to the whole training set, which may have regularization effects.",
"9 Each point represents a model trained to convergence on the fixed amount of in-domain and ParaCrawl data whose amount is specified by the x-axis.",
"Learning curves (Figure 4) further illustrate the advantage of our method.",
"Continued training on in-domain data only starts from a strong initialization (thanks to pre-training on large general domain data) but heavily oscillates over training without reaching the initial performance.",
"This behavior may be due to the sparsity of the TED data: the small randomly sampled training set may not represent the development and test data well.",
"Std ML shows opposite behavior to IN CT: it starts from a lower initial performance, and then gradually improves to a level comparable to IN CT.",
"Std rand behaves similarly to std MLin other words, uniformly sampling from Paracrawl drags the initial performance down without helping with the final performance.",
"Compared to all above, the curriculum learning models start from a high initial performance, suffer much less oscillation than IN CT, and gradually achieve the highest performance.",
"10 4.3 Impact of Curriculum Learning on Lexical Choice: S 4 Analysis How do translations improve when using curriculum learning?",
"We characterize the impact of curriculum learning on lexical translation errors using the S 4 taxonomy of domain change errors introduced by Irvine et al. (2013) for phrase-based machine translation: (1) SEEN: incorrect translation for a source word that has never been seen in the training corpus; (2) SENSE: incorrect translation for a previously seen source word, whose correct translation (sense) has never been seen in the training corpus; (3) SCORE: a score error is made when the source word and its correct translation are both observed in training data, but the incorrect translation is scored higher than the correct alternative ; and (4) SEARCH: an error caused by pruning in beam search 11 .",
"We extend this taxonomy to neural machine translation.",
"As the unit of S 4 analysis is word alignment between a source word and a reference target word, we first run fast-align (Dyer et al., 2013) to get the source-target word alignments.",
"After this, we follow the algorithm shown in Appendix C to give a summary of S 4 errors on the model's translation of test set.",
"10 When converged, IN CT does not outperform CL ML.",
"11 We will only focus on the first three error categories in this paper for the purpose of model comparison.",
"Figure 5 shows the word translation results for the test set of German-English TED.",
"Most of the errors are SCORE errors, while SEEN and SENSE errors are relatively rare.",
"Curriculum learning significantly improves the adapted NMT systems at the word level with 4096k Paracrawl data selected by CDS, curriculum continued training model can translate 554 more words correctly than the standard continued training model.",
"This improvement mainly happens in SCORE errors: 1.75% of SCORE errors are corrected.",
"SEEN and SENSE errors are also reduced by 0.02% and 0.026%, respectively.",
"But overall, CL does not help much on SEEN errors.",
"We characterize the sentences chosen by different data selection methods, to understand their effect on adaptation as observed in Section 3.3.",
"Selected Sentences Overlap For each domain in German-English, we compute the overlap between the top n ML and CDS Paracrawl sentences.",
"The overlap is as low as 3.69% for the top 64k sentences in the TED domain, and 8.43% for the patent domain.",
"Even in the top 4096k sentences, there are still 46.25% and 65.40% different ones in TED and patent domain respectively.",
"See Table 2 for examples of selected sentences.",
"Average Sentence Length The ML score prefers longer sentences and is more correlated with sentence length (See Figure 6) the curve TED ML is near linear, which might be a side-effect of sentence-length normalization.",
"CDS produces sentences that better match the average sentence length in the in-domain corpus, which was also observed in Santamara and Axelrod (2017).",
"Out-of-Vocabulary Words We count out-of-vocabulary (OOV) tokens in in-domain corpus based on the vocabulary of selected unlabeled-domain data (Figure 7).",
"The CDS subsets cover in-domain vocabulary better than ML subsets as expected, since CDS is based on vocabulary coverage.",
"relative frequencies compare in the in-domain and selected Paracrawl data?",
"We measure the difference of unigram distributions from two corpora by Hellinger distance , which is defined as Equation 2 when the probability distribution is discrete, where P and Q are the unigram distributions for the source side of in-domain and Paracrawl.",
"V is the vocabulary size.",
"12 HHD ( P, Q ) = 1 2 (cid:118)(cid:117)(cid:117)(cid:116) V (cid:88) i =1 ( p i q i ) 2 .",
"the in-domain vocabulary distribution than CDS.",
"With respect to the OOV rate and unigram distribution, patent is more distant from the Paracrawl data than TED is.",
"Figure 2 suggests that CDS dominates ML for distant domains such as Patent, while ML can do slightly better than CDS for domains that are not that distant such as TED.",
"Curriculum learning has shown its potential to improve sample efficiency for neural models (Graves et al., 2017; Weinshall and Cohen, 2018) by guiding the order of presented samples, usually from easier-to-learn samples to difficult samples.",
"Although there is no single criterion to measure difficulty for general neural machine translation tasks (Kocmi and Bojar, 2017; Wang et al., 2018; Zhang et al., 2018; Kumar et al., 2019; Platanios et al., 2019), for the domain adaptation scenario, we measure difficulty based on the distance from in-domain data.",
"Compared to previous work, our application of curriculum learning mainly focuses on improvements on translation quality without consideration of convergence speed.",
"12 In Figure 8, for the purpose of fair comparison, each distribution is defined on the same vocabulary, consisting of the source side vocabulary of TED, patent and Paracrawl data.",
"Chu and Wang (2018) surveyed recent domain adaptation methods for NMT.",
"In their taxonomy, our workflow in Figure 1 can be considered a hybrid that uses both data-centric and model-centric techniques due to the use of additional unlabeled-domain data, with a modified training procedure based for continued training.",
"For data-centric domain adaptation methods, our curriculum learning approach has connections to instance weighting.",
"In our work, the presentation of certain examples at specific training phases is equivalent to up-weighting those examples and down-weight the others at that time.",
"Weights of similar samples and less similar ones are adjusted dynamically during the training of NMT models based on the curriculum training strategy.",
"In NMT, instance weighting is usually implemented by modifying the objective function (Chen and Huang, 2016; Wang et al., 2017; Chen et al., 2017).",
"In statistical machine translation, Matsoukas et al. (2009) extract features from sentences to capture their domains and then use a clas-sifier to map features to sentence weights.",
"Foster et al. extend this method by weighting at the level of phrase pairs.",
"Shah et al. (2010) use resampling to weight corpora and alignments.",
"Mansour and Ney (2012) focus on sentence-level weighting for phrase extraction.",
"Zhou et al. (2015) weight examples based on their word distributions.",
"For model-centric domain adaptation methods, our work is related to van der Wees et al. (2017).",
"They adopt gradual fine-tuning, which does the opposite of our method: training starts from the whole dataset, and the training set gradually decreases by removing less similar sentences.",
"Wang et al. (2018) use a similar approach, where the NMT model is trained on progressively noise-reduced data batches.",
"However, such schedules have the risk of wasting computation on non-relevant data, especially when most of the Paracrawl data is not similar to the target domain.",
"We introduced a curriculum learning approach to adapt neural machine translation models to new domains.",
"Our approach first ranks unlabeled-domain training samples based on their similarity to in-domain data, and then adopts a probabilistic curriculum learning strategy so that more similar samples are used earlier and more frequently during training.",
"We show the effectiveness of our method on four tasks.",
"Results show that curriculum learning models can improve over the standard continued training model by up to 3.22 BLEU points and can take better advantage of distant and noisy data.",
"According to our S 4 analysis of lexical choice errors, this improvement is mainly due to better scoring of words that acquire a new SENSE or have a different SCORE distribution in the new domain.",
"Our extensive empirical analysis suggests that this approach is effective for several reasons: (1) It provides a robust way to augment the training data with samples that have different levels of similarity to the in-domain data.",
"Unlabeled-domain data such as webcrawls naturally have a diverse set of sentences, and the probabilistic curriculum allows us to exploit as much diversity as possible.",
"(2) It implements the intuition that samples more similar to in-domain data are seen earlier and more frequently; when adding a new shard into the training set, the previously visited shards are still used, so the model will not forget what it just learned.",
"(3) It builds on a strong continued training baseline, which continues on in-domain data.",
"(4) The method implements best practices that have shown to be helpful in NMT, e.g. bucketing, mini-batching, and data shuffling.",
"For future work, it would be interesting to measure how curriculum learning models perform on the general domain test set (rather than the in-domain test set we focus on in this work); do they suffer more or less from catastrophic forgetting (Goodfellow et al., 2014; Kirkpatrick et al., 2017; Khayrallah et al., 2018; Thompson et al., 2019)?",
"This work is supported in part by a AWS Machine Learning Research Award and a grant from the Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9115.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.",
"We thank the organizers and participants of the 2018 Machine Translation Marathon for providing a productive environment to start this project.",
"We also thank Amittai Axelrod, Hongyuan Mei and all the team members of the JHU SCALE 2018 workshop for helpful discussions."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"result",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"objective",
"objective",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other"
] |
[
"Abstract Grounding natural language instructions on the web to perform previously unseen tasks enables accessibility and automation.",
"We introduce a task and dataset to train AI agents from open-domain, step-by-step instructions originally written for people.",
"We build RUSS (Rapid Universal Support Service) to tackle this problem.",
"RUSS consists of two models: First, a BERT-LSTM with pointers parses instructions to ThingTalk, a domain-specific language we design for grounding natural language on the web.",
"Then, a grounding model retrieves the unique IDs of any webpage elements requested in ThingTalk.",
"RUSS may interact with the user through a dialogue (e.g. ask for an address) or execute a web operation (e.g. click a button) inside the web runtime.",
"To augment training, we synthesize natural language instructions mapped to ThingTalk.",
"Our dataset consists of 80 different customer service problems from help websites, with a total of 741 step-by-step instructions and their corresponding actions.",
"RUSS achieves 76.7% end-to-end accuracy predicting agent actions from single instructions.",
"It outperforms state-of-the-art models that directly map instructions to actions without ThingTalk.",
"Our user study shows that RUSS is preferred by actual users over web navigation.",
"Grounding natural language is a key to building robots and AI agents (Chen and Mooney, 2011) that interact seamlessly with people.",
"Besides grounding tasks visually (Mirowski et al., 2018; Venugopalan et al., 2015), future AI agents must be able to ground language and execute actions on the web.",
"We build a general-purpose, interactive agent to master tasks from open-domain natural language instructions on websites.",
"Conversational agents capable of providing universal access to the web through a language interface are an important step towards achieving information equity.",
"These agents empower those who are visually impaired or situationally preoccupied (e.g. driving) to obtain web-based knowledge and services for which they would otherwise require a laptop or mobile device for (Sarsenbayeva, 2018).",
"Already, virtual assistants and call centers demonstrate a large number of scenarios where language interfaces backed by web backends are required by companies and users.",
"However, unlike virtual assistants, web agents like RUSS are universal, navigating the Web, interacting with users, and bypassing the need for domain-specific APIs.",
"On average over 60 % of Americans have contacted customer service in a month (Statista Research Department, 2019).",
"A call center manager might instruct its agents to do the following to help a customer through a password reset: go to pass-wordreset.com; ask the user for their desired new password; click the reset button .",
"As the agent performs the instructions on the web behind-the-scenes, the user is read information or asked questions periodically over a conversational interface (such as a phone).",
"Our approach, RUSS (Figure 1), trains an agent that masters any web task specified from open-domain instructions.",
"To do so, we design a domain-specific language (DSL) for grounding on the web and implement it as a subset of the ThingTalk programming language (Campagna et al., 2019).",
"Each natural language instruction maps to one of six agent actions that interact with users or operate on webpages.",
"Actions that operate on the web are passed element IDs that are retrieved from high-level user language by grounding its corresponding ThingTalk on the active webpage.",
"In the following, we use ThingTalk to refer to our subset taylored to web operations, where not ambiguous.",
"We break down the problem into two components: (1) a semantic parser that takes single-step natural language instructions and maps to ThingTalk statements using a BERT-LSTM pointer network, and (2) a grounding model that takes ThingTalk and retrieves an element ID on the active webpage where needed.",
"The contributions of this work include: 1. Task : The new problem of building an interactive web agent capable of mastering tasks from open-domain natural language instructions.",
"2. RUSS : A fully functioning agent that services user support requests from natural language instructions.",
"RUSS consists of a semantic parser, a grounding model, and a runtime.",
"We release RUSS as an open-source repository 1 3. ThingTalk : A typed DSL that grounds natural language instructions on the web.",
"ThingTalk is designed to be an expressive 1 https://github.com/xnancy/russ target for natural language semantic parsing, and amenable to training data synthesis.",
"4. RUSS Dataset :",
"a) Evaluation: a collection of 741 real-world step-by-step natural language instructions (raw and annotated) from the open web, and for each: its corresponding webpage DOM, ground-truth ThingTalk, and ground-truth actions; and",
"b) Synthetic: a synthetic dataset of 1.5M natural language instructions mapped to ThingTalk.",
"5. Evaluation of RUSS : 76.7 % accuracy on our RUSS evaluation dataset.",
"Our semantic parser maps natural language instructions to ThingTalk at 85% accuracy and our grounding model achieves 75% accuracy in resolving web element descriptions.",
"A user study of RUSS shows preference of the natural language interface over existing Web UIs.",
"Grounding in the Visual and Physical Worlds (Robotics) .",
"Grounding language in both the physical world (Chen and Mooney, 2011) and in images and videos ((Venugopalan et al., 2015), (Hen-dricks et al., 2018)) through systems like visual question-answering (Antol et al., 2015) have been extensively explored.",
"For example, Thomason et al. (2016) describe the game I Spy where human and robot take turns describing one object among several in a physical environment, requiring grounding of natural language to the physical world, and robot-human dialogues are explored in (Thomason et al., 2019).",
"Previous work has proposed adaptive language interfaces for robots in dyanmic settings such as (Liu et al., 2018), (Ito et al., 2020; Liu et al., 2018), (Karamcheti et al., 2020), and (Kim et al., 2020).",
"Other work builds physical world agents that operate through sequential actions (Chen and Mooney, 2011; Misra et al., 2017; Mirowski et al., 2018).",
"Natural Language Digital Interfaces .",
"An intelligent automated software assistant that collaborates with humans to complete tasks was first introduced in (Allen et al., 2007).",
"Since then, identifying UI components from natural language commands has been an important area of research in grounding, with prior work investigating approaches to map natural language instructions to mobile interfaces such as Android (Li et al., 2020) and Adobe photo editing GUIs (Manuvinakurike et al., 2018).",
"Earlier work mapped natural lan-Agent Action Description @goto( url ) Navigate to the given URL @enter( element_id , dict_key ) Find the closest match to the given dictionary key and enter its value in the given input element @click( element_id ) Click on the given element @read( element_id ) Read the content of the given element to the user @say( message ) Read the given message to the user @ask( dict_key ) Ask the user for the value of a dictionary key Grounding Function Description @retrieve( descr , type , loc , above , below , right_of , left_of ) : element_id Retrieves the elements matching the descriptors, returns an element_id .",
"More recently, Pasu-pat et al. (2018) attempted to map natural language commands written by Amazon Mechanical Turkers to web elements (without actions).",
"Unlike prior research, our work focuses on a new domain of parsing natural language instructions into executable actions on the web, where instead of mapping directly to elements using a neural model, we semantically parse natural language instructions to formal actions that support web navigation as well as user interactivity.",
"Dialogue Agents for The Web .",
"Other web-based dialogue agents are developed through single-use heuristics and more recently through programming-by-demonstration (PBD) tools.",
"This approach allows users and developers to author programs that operate on the web and invoke those programs in natural language (Li et al., 2017; Li and Riva, 2018; Fischer et al., 2020; Sarmah et al., 2020).",
"CoScripter (Leshed et al., 2008) additionally allows the user to edit the demonstration in natural language, and parses a limited natural language into executable form.",
"While related in end goal, our work does not require user demonstration and can operate using existing real-world instructions.",
"We note though that the WebLang intermediate representation and our grounding model can be used to improve the robustness of PBD systems as well.",
"Given a set of natural language instructions S = ( i 1 , . . . , i n ) and a starting web page, our task is to construct an agent that follows the instructions through a series of action A = ( a 1 , . . . , a n ) .",
"Actions include web navigation and end-user interaction in order to obtain necessary information.",
"Surveying online customer service tasks, 6 action operations were identified as necessary for agents: open a URL page, enter text, click on buttons, say something to the user, read the results to the user, and ask user for some information.",
"Details are described in Table 1, where elements on a web page are assumed to be given unique element IDs.",
"RUSS is trained to execute tasks by grounding natural language instructions on the web.",
"The modular design of RUSS , with separate semantic parser and grounding model, is motivated by the high cost of training data acquisition, and the ability to improve each component independently.",
"We first describe ThingTalk, then the three components of Russ: the semantic parser model, the grounding model, and the runtime.",
"ThingTalk is designed to be (1) robust to open-domain natural language, (2) a suitable target for semantic parsing from natural language, and (3) trainable with only synthetic data.",
"The primitives in ThingTalk include all the agent actions and a grounding function @retrieve (Table 1).",
"The latter is informed by the descriptions in the instructions we found in the wild.",
"The input features accepted by @retrieve are: descr : textual description of the element type : type of element (button, input box, paragraph, header, etc.) loc : absolute position of the element on the page above/below/... : position of the element relative to another; above, below, right, and left.",
"positional in design.",
"A ThingTalk program is a sequence of statements with syntax [ r ] a , where r is the retrieve operation and a is an agent action.",
"@retrieve returns an element_id that is passed to @click (to click on the element), @read (to read the text in the element to the user), or @enter (to enter text in the element).",
"For agent actions that require an element id, we call the sequence of @retrieve functions used to obtain the final element id used in the agent action the query .",
"See Figure 1 for sample ThingTalk parses from natural language instructions.",
"The orange ThingTalk parse demonstrates a query with 2 @retrieve functions.",
"To translate natural language instructions into ThingTalk, we use the previously proposed BERT-LSTM model (Xu et al., 2020).",
"BERT-LSTM is an encoder-decoder network that uses a pre-trained BERT encoder (Devlin et al., 2019) and LSTM (Hochreiter and Schmidhuber, 1997) decoder with a pointer-generator (See et al., 2017; Paulus et al., 2018).",
"The architecture is shown in Fig. 2. The model is trained to encode natural language utterances and produce the ThingTalk code token-by-token.",
"The pointer network in the decoder allows the model to predict out-of-vocabulary words by copying from the input utterances.",
"We preprocess the natural language by performing entity extraction , where entity strings are mapped to placeholder tokens (URL, LOC, TYPE), and the strings are substituted back into the ThingTalk code after parsing with the placeholder tokens.",
"This resolves errors related to long URLs being broken into tokens that are not always copied to ThingTalk together and helps disambiguate important input features.",
"For example: \"Click the button on the top of the amazon.com Instruction : Enter the user's order number in the text field that says order number DOM : element_id: 1, type = \"body\" element_id: 2, type = \"h1\", text = \"Your Orders\" element_id: 3, type = \"form\" . . . element_id: 48, type = \"label\", text = \"order number\" element_id: 49, type = \"input\" . . . ThingTalk : @retrieve ( description = order number , type = input ) @enter ( text = order_number , element = id ) Action : @enter ( text = order_number , element = 49) Figure 3: Representation of an instruction in RUSS page\" maps to \"Click the TYPE on the LOC of the URL page\".",
"We use a simple set of heuristics to identify the entity strings for each placeholder token, such as the presence of a",
"'www.',",
"'.com', 'http' substring to indicate a URL entity.",
"The webpage is modeled using the Document Object Model (DOM), which is a hierarchical representation of all elements in the page.",
"Our DOM representation records element features such as the following for each element: inner text content of the element HTML id , tag , class hidden state (True/False if element is visible on the webpage) height/width of the element left/right/top/bottom coords of the element list of child elements in the DOM.",
"RUSS 's grounding model grounds a ThingTalk @retrieve function by mapping it to an element ID.",
"The input features in the @retrieve function are mapped against scores derived from the element features in the DOM to identify the best match.",
"The grounding model consists of the following steps.",
"It filters elements by their type and absolute location.",
"Next it handles relative positioning by identifying those elements with the right relational context to, and not too far away from, the given element's coordinates.",
"It passes the text of the remaining candidates through a Sentence-BERT (Reimers and Gurevych, 2019) neural network and computes the cosine similarities of their embeddings with the embedding of the input text description.",
"To execute the grounded ThingTalk program, RUSS starts a new automated Chrome session for each task and uses Puppeteer to automate web actions in the browser.",
"RUSS uses a Google Voice API to implement actions involving user interactions (@say, @ask, or @read).",
"For @ask actions, RUSS uses a preprogrammed dialogue to ask the user for a dictionary key (such as name), verifies the dictionary key is a valid string, and stores the value given by the user in a user's dictionary under that key.",
"In @enter actions, we retrieve information to be entered by finding its closest match among the user's dictionary keys.",
"This paper contributes two detasets, the RUSS Evaluation Dataset with real-world instructions and the RUSS Synthetic Dataset for training semantic parsers.",
"The RUSS Evaluation Dataset consists of real-world tasks from customer service help centers of popular online companies.",
"To make our task-open domain, the online help centers we use span a diverse range of domains including music, email, online retail, software applications, and more.",
"For each instruction in a task, the dataset includes: the English instruction in natural language as it appears in the original website, and the human-edited version of the instruction the DOM of the web page where the instruction can be executed, with the element features associated with each element the ThingTalk code corresponding to the instruction the grounded action of the instruction To collect the RUSS Evaluation dataset, we acquire a list of Top 100 visited websites and locate tasks that offer line-by-line help instructions from those.",
"An author of the paper walked through each task, performed the actions as instructed, scraped the webpage in the browser, and annotated the instruction with the corresponding ThingTalk code.",
"Steps found missing from the instructions were inserted.",
"If an instruction mapped to several actions, the text was broken into individual Figure 4: Lengths of instructions in the RUSS Evaluation Dataset Figure 5: Distribution of actions in the RUSS Evaluation Dataset.",
"instructions.",
"Note that the human worker did not participate in the design of ThingTalk; they were asked to write instructions as if they were teaching another human step-by-step.",
"We collected a total of 80 tasks and 741 lines of instructions from 22 different online help centers.",
"The dataset is split into a dev set and a test set, with 304 instructions from 30 tasks in the dev set and 437 instructions from 50 tasks in the test set.",
"The RUSS Evaluation dataset is not used for training.",
"On average, instructions in RUSS contain 9.6 tokens (Fig. 4), significantly longer than the crowdsourced web instructions in PhraseNode which average 4.1 tokens.",
"The three most common actions in the dataset are click, ask and enter (Fig. 5).",
"61.4% of the natural-language instructions require retrieving an element from the webpage (click, enter, read).",
"Table 2 illustrates different types of reasoning supported by the @re-trieve descriptors and their frequency in the RUSS Evaluation Dataset.",
"Lastly, 76 of the 455 element queries use two @retrieve functions, with the rest all just using one, and 53.7%, 42.7%, and 3.6% of the @retrieve functions have 1, 2, and 3 descriptors, respectively (Fig. 6).",
"While the language has just 7 core actions, the combinatorial space of possible actions and web elements is much larger on the order of 1000s ThingTalk Includes: (@retrieve feature) Description Frequency Type reasoning (type) Requires specific HTML type (e.g. button, checkbox) 29.0% Input target (type = input) Requires target element is a text input 25.0% Relational reasoning (below/above/left of...) References neighboring features of the element 10.3% Spatial reasoning (location) References element location on the webpage 4.6% No web element (No @retrieve) No element (operation is @ask / @goto / @say) 38.6% Table 2: Subset of reasoning types (with the @retrieve input feature used to indicate it) supported by ThingTalk and their frequency in the RUSS dataset.",
"of possible combinations per instruction.",
"On average the DOMs of the webpages contain 689 web elements each.",
"The total vocabulary size of the Evaluation Dataset found in the wild is 684 words.",
"We find that at least one of the most frequent 300 words in the Evaluation vocabulary is present in >50% of the Evaluation Dataset instructions.",
"There are also many domain-specific words throughout the instructions.",
"Labeling large numbers of instructions in ThingTalk for training is time consuming and demands expertise.",
"To address this, we use a typed template-based synthesis method to generate our training data.",
"We write templates for each ThingTalk primitive and common combinations thereof.",
"We also scrape a large dataset of naturally occurring DOM element text, webpage URLs, and phrases that are likely to be variable names to use for each parameter.",
"The synthesizer compositionally expands the templates and sample values from the scraped dataset to construct a large training set of instructions mapped to ThingTalk automatically.",
"We generate hundreds of different types of natural language templates which are combined to create a Synthetic Dataset with 1.5M training samples.",
"This composition method creates roughly 840 distinct templates.",
"To promote generalizability of our model, the total vocabulary size of the Synthetic corpus is large compared to the evaluation vocabulary size at Model Accuracy (test) RUSS (1.5M training parses) 87.0% Ablations Accuracy (dev) RUSS (1.5M training parses) 88.2% entity extraction 77.6% 1M training parses, entity extraction 70.0% Table 3: Evaluation of Semantic Parsing Model (trained on 1.5M parses) on RUSS Evaluation test set.",
"An example of a simple template is: At the loc of the page, @click the button that says descr which is mapped to the ThingTalk: @retrieve( descr = descr , loc = loc ) @click (element = id) 5 Evaluation RUSS achieves 76.7% overall accuracy on the Evaluation Dataset, even though all of RUSS , including the semantic parser is trained with only synthetic data.",
"We perform 3 experiments to evaluate the individual components and the system as a whole: 1) Accuracy evaluation of RUSS 's Parsing Model with ablation studies.",
"2) Accuracy evaluation and baseline comparisons of RUSS 's Grounding Model.",
"3) User study evaluating RUSS 's ability to master 5 tasks on-the-job.",
"We test usability and efficacy of RUSS compared with existing customer service help websites.",
"Our first experiment evaluates the accuracy of our semantic parser on the RUSS Evaluation dataset.",
"We measure Exact Match Accuracy : a parse is considered correct only if it matches the gold annotation token by token.",
"The results are shown in Table 3. The parser obtains 87.0% accuracy on the test set.",
"Despite using no real-world training data, the semantic parser achieves high accuracy on the challenging evaluation set.",
"It achieves an accuracy of 81.4% for instructions involving web elements, and 94.6% for the rest.",
"This suggests the semantic parser can handle both types of instructions with high accuracy, especially instructions that parse to user interactions (no web element).",
"We perform an ablation study on the RUSS Evaluation dev set as seen in Table 3. RUSS achieves 88.2% accuracy on the dev set.",
"The entity extraction technique where string entities are replaced with placeholders during training, as discussed in Section 3.2, contributes 10.6% improvement in accuracy.",
"Training without this pre-processing step and with only 500K parses will reduce the accuracy further by 7.6%.",
"This suggests that it is important to have a large synthetic training data set.",
"With an effective semantic parser to ThingTalk, we next measure the grounding accuracy: the percent of correctly identified element_ids from the 252 natural language commands referring to web elements in the RUSS test set.",
"As shown in Table 4, RUSS achieves an accuracy of 63.6%.",
"81.4% of the instructions are parsed correctly, and 77.9% of the correct parses are grounded accurately.",
"Had the semantic parser been correct 100% of the time, the Grounding Model would achieve an accuracy of 73.0%.",
"The semantic parser is more likely to correctly parse simple instructions such as \"click sign in\", which are also generally easier for the Grounding Model, explaining the delta between 77.9% and 73.0%.",
"We create an End-to-end Baseline model to compare against the 2-step approach of RUSS .",
"Here, we represent web elements using RUSS 's feature elements as before.",
"However, we do not parse the natural language sentences into their input features in RUSS , but is left intact as input to Reasoning RUSS PhraseNode Type 67.8% 61.5% Input 75.6% 60.4% Relational 70.0% 53.5% Spatial 36.7% 30.3% Table 5: Grounding Accuracy Comparison of RUSS and PhraseNode by Reasoning type on the RUSS Evaluation test set.",
"Sentence-Bert to compute its embedding.",
"Like Section 4.3, the element sharing the closest embedding with the input sentence is returned.",
"This end-to-end baseline model performs with 12 .",
"6% less accuracy than RUSS , illustrating the benefits of using a semantic parser.",
"To compare our grounding model with state-of-the-art results, we also replicate the best performing embedding model from (Pasupat et al., 2018), which we reference as PhraseNode.",
"The webpage features used as inputs in PhraseNode are a subset of our representation.",
"PhraseNode achieves an accuracy of 46.5%, which is 4.6% worse than our Baseline and 17.2% lower than RUSS .",
"We show that the combination of a high-performance semantic parser and a well-tuned grounding model can outperform the best end-to-end neural models for grounding on the web.",
"The entire one-time process for training RUSS takes approximately 7 hours on an NVIDIA Tesla V100.",
"RUSS can perform a new task on-the-job by running the instructions through the semantic parser in less than 1 minute.",
"We analyze how well RUSS and PhraseNode perform for sentences in the Evaluation Set requiring different types of reasoning (Table 5).",
"Russ outperforms the state-of-the-art PhraseNode (Pa-supat et al., 2018) for all the reasoning types.",
"It performs well on grounding tasks that involve type, input, and relational reasoning.",
"Evaluation of the spatial reasoning instructions revealed that many referenced image features (e.g. click the hamburger menu icon), which is not supported by RUSS .",
"The results show that ThingTalk is simple enough to be generated by a neural language model, while comprehensive enough to express the wide range of open-domain natural language instructions for web tasks.",
"efits from added reasoning in instructions that constrains the potential set of element candidates (e.g. the element must be an input).",
"Webpages commonly have thousands of elements and the probability of matching the right element increases with constraints.",
"Of the 741 instructions in the RUSS dataset, 6 contain attributes that are not well expressed in ThingTalk.",
"For example, select the user's birth month in the month drop down is not parsed correctly because ThingTalk does not have a notion of selecting an element in a menu.",
"This feature will be added in the future.",
"Another source of errors lies in how webpages are constructed.",
"Important attributes needed for grounding can be hidden behind classes.",
"For example, an element may be labeled as Click here, but the text is not present in the DOM text attribute and instead obscured behind a site-specific class name such as \"next-page-button\".",
"Grounding techniques on visual data can be helpful in resolving this class of problems.",
"The goal of our user study is to evaluate the end-to-end feasibility of RUSS on open-domain instructions from real customer service websites, and evaluate how users respond to RUSS .",
"This is a small-scale study with promising early results, but can benefit from further user studies on larger populations.",
"We recruited 12 participants who were asked to complete 5 customer-support tasks (Table 6), cho-sen from popular websites: Amazon, Spotify, pinterest, Google, and Walmart, with both RUSS and the browser.",
"For all tasks, users were given a fake persona (a set of credentials such as email, password, gift card code, etc) to use when interacting with the agent.",
"The study was approved by our IRB and participants were compensated.",
"The participants in our study ranged from ages 21 to 68 years old, with an average age of 36 years old, a 50/50 male/female ratio, and varied technical sophistication.",
"To reduce learning effects, Figure 7: Average number of user interactions via utterance or click (left); average time taken to complete tasks in seconds (left) we used Latin Square Balancing (Bradley, 1958) to ensure that both the web and RUSS trials of each site were performed first half the time.",
"We record users' time to perform each task, number of turns (in RUSS ) or clicks (on the web) required to achieve each task, and gave each participant an exit survey containing qualitative assessments.",
"Participants were able to complete 85% of the tasks on their own on the web and 98% of tasks with the help of RUSS .",
"Those who did not fin-ish their task either gave up or failed to complete the task within 5 minutes.",
"The time it took users to accomplish each task was similar for the Web and RUSS (Fig. 7), though RUSS was significantly faster for Task 2, a more complex task users said they were unfamiliar with.",
"This seems to indicate that RUSS is more favorable for unfamiliar, complex tasks.",
"After trying the 5 tasks, 69 % of users reported they prefer RUSS over navigating online help pages.",
"Reasons cited include ease of use, ef-ficiency, and speed, even though the times of completion were similar.",
"Participants were generally pleased with their RUSS experience, and only one person said that they were unlikely to use RUSS again (Fig. 8).",
"However, many users did report that they wished RUSS was as visually stimulating as the browser.",
"Other users noted that they felt more familiar and comfortable with the browser.",
"As a final discussion, it is worth noting that while the user study results are extremely promising, this is a small scale study.",
"RUSS 's runtime needs stronger error handling for out-of-context conversation.",
"Currently, RUSS gives the user 3 tries to return an expected response before terminating.",
"RUSS also times out if a webpage takes more than >60 seconds to load in Puppeteer.",
"We saw instances of both of these situations in the RUSS user study in the few cases the user failed to complete a task.",
"RUSS demonstrates how a semantic parser and grounding model can be used to perform unseen web tasks from natural language instructions.",
"By achieving 76.7% accuracy on the RUSS Evaluation Dataset, we show how a modular semantic parsing approach can outperform end-to-end neural models on this task, and demonstrate how humans interact with RUSS -like systems in the user study.",
"Like many datasets in NLP, we believe extensive research is still required to go from RUSS's 76.6% overall accuracy on the Evaluation Dataset to 100%.",
"As seen in Table 4, prior models like PhraseNode achieve only 46.5% grounding accuracy, which points to additional work necessary in grounding natural language on the web.",
"The RUSS Evaluation dataset introduces a set of real instructions for grounding language to executable actions on the web to evaluate future research in this direction, including training semantic parsers to new targets using real-world instructions and neural models for grounding formal language representations on the web.",
"Our work provides the task, technical foundation, and user research for developing open-domain web agents like RUSS .",
"The user study conducted in this paper was submitted to the Institutional Review Board and received",
"References James Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, and William Taysom.",
"received IRB Exempt status.",
"All participants were read an IRB consent form prior to the user study, which detailed the study details.",
"No deception was involved in the study: all participants knew they were evaluating an AI agent in the conversation portion of the user study and were not led to believe otherwise.",
"They study took about 20 minutes.",
"All participants were compensated with $10.",
"The webpages scraped for the RUSS dataset are all public domain webpages.",
"No individual personal identifying information was used to obtain the webpages.",
"On websites that required accounts to access pages, we created fake user accounts with non-identifying usernames / passwords / emails to navigate the websites in order to limit any privacy risks that may be involved.",
"In the future, we see web agents like RUSS helping improve accessibility by helping individuals who are visually impaired, less technologically advance, or otherwise preoccupied receive equitable access to information.",
"Before systems like RUSS are put to practice at scale, the authors believe more research must be done in understanding user behavior with web agents to safeguard against downstream consequences of system errors and to better understand how information can be effectively delivered by AI agents that operate in potentially high-stakes transactions such as health or finance.",
"Our user study is the first step in this direction.",
"We thank Silei Xu for helpful discussions on constructing the Synthetic dataset, and Richard Socher for feedback and review of the final publication.",
"This work is supported in part by the National Science Foundation under Grant No. 1900638 and the Alfred P. Sloan Foundation under Grant No.",
"G-2020-13938.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements of outside organizations."
] | [
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"This paper proposes a novel document representation, called Multi-Resolution Representation (MulR), to improve the early detection of risks in social media sources.",
"The goal is to effectively identify the potential risk using as little evidence as possible and with as much anticipation as possible.",
"MulR allows us to generate multiple views of the text.",
"These views capture different semantic meanings for words and documents at different levels of granularity, which is very useful in early scenarios to model the variable amounts of evidence.",
"The experimental evaluation shows that MulR using low resolution is better suited for modeling short documents (very early stages), whereas large documents (medium/late stages) are better modeled with higher resolutions.",
"We evaluate the proposed ideas in two different tasks where anticipation is critical: sexual predator detection and depression detection .",
"The experimental evaluation for these early tasks revealed that the proposed approach outperforms previous methodologies by a considerable margin.",
"Everyday there is a huge amount of people interacting in many social media sites.",
"Unfortunately this immense cyber-world has been misused by cyber-criminals, who hide in the depths of the web.",
"For this reason, the social media information has been increasingly studied in the context of applications related to security, forensics and e-commerce.",
"Recently the early prediction scenarios have attracted the attention of the scientific community (Losada et al., 2017), which aims to prevent major threats in a number of practical situations by analyzing the text as evidence (e.g., sexual harassment, cyberbullying, etc).",
"In Natural Language Processing this emerging field is called early text classification and the goal is to identify risky-target categories by using as few text as possible and with as much anticipation as possible.",
"In real scenarios the amount of evidence available from users under analysis is continuously growing.",
"Consider for instance chat rooms, or posts and comments in social networks, these text sources comprise cumulative evidence for early prediction that can be used to better capture the phenomenon under study (Escalante et al., 2017; Losada et al., 2017).",
"This scenario has challenging particularities.",
"For example, in early stages where 10% or 20% of the information is available it is necessary to model very short length documents, which tend to produce sparse and low discriminative representations.",
"On the other hand late stages require to exploit as much evidence as possible to make accurate predictions.",
"This dynamism between the document length and classification stages makes necessary an adequate representation, that naturally copes with the dynamic amount of evidence in short and long texts generated by users at each stage.",
"Traditional textual representations, such as Bag-of-Words (BoW) (Joachims, 1998), have problems dealing with social media short texts since they cause the representation to be high dimensional and very sparse.",
"Moreover, in the particular case of early risk prediction, class unbalance and noisy text also represent a challenge.",
"In this paper we propose a representation that deals with these challenges by taking advantage of word vectors into a novel methodology for representing documents.",
"This representation generates high-level features, that we called meta-words, which capture concepts at different resolution levels.",
"A meta-word is a primitive construction represented by a vector that summarizes the information of semantically related words.",
"Our methodology associates words with similar semantic meaning to the same meta-words.",
"These meta-words 1216 are obtained by applying clustering techniques to word representations, where the resultant cen-troids comprise the meta-words.",
"Documents are then represented by a Bag-of-Centroids (BoC), that is, a histogram accounting for the occurrence of coarse thematic/semantic primitives, i.e., the meta-words.",
"This part of the work is inspired by the Bag-of-Visual-Words (BoVW), which is widely used in computer vision to represent images (Sivic and Zisserman, 2004; Lazebnik et al., 2006).",
"The key aspect for early scenarios is that the number and size of meta-words, allow us to manipulate the level of granularity or the resolution of the representation.",
"This property is very useful to capture discriminative information along the growing amount of available evidence at each early stage.",
"We thus propose a multi-resolution approach, in which primitives at different resolutions are combined to capture feature concepts at multiple levels of detail.",
"The contributions of this paper are twofold:",
"(i) a new Multi-Resolution (MulR) document representation, a generalization to represent documents by exploiting word-vectors at different levels of resolutions;",
"(ii) an empirical validation of the usefulness of multiple resolution levels for early risk detection on social media documents.",
"Our experimental results show that this approach is a promising alternative for early text classification scenarios, where there is a need to make predictions as soon as possible, with little evidence, while at the same time, being robust to incorporate more evidence as it becomes available.",
"We recorded experimental results of an extensive evaluation of our proposed techniques over two benchmarks for early scenarios: sexual predator detection and depression detection.",
"Results showed that in all cases our methodology outperforms state-of-the-art methodologies.",
"Interestingly, document representations based on partitioning the word-embedding space, like ours, are somewhat similar to topic modeling based representations.",
"In the experimental section we also compare the performance of our method to different topic-based representations like Latent Semantic Analysis (LSA) (Deerwester et al., 1990) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003).",
"Experimental results showed that our method outperformed the reference techniques.",
"We elaborate on the benefits and limitations of our proposed techniques later in this paper.",
"The Early Text Categorization problem is an emerging research topic with scant work (Dulac-Arnold et al., 2011; Escalante et al., 2016, 2017).",
"Recently, the relevance of the problem has motivated specialized forums such as eRisk-CLEF17 (Losada et al., 2017).",
"One of the first attempts is based on processing documents in a sentence-level basis (Dulac-Arnold et al., 2011).",
"At every time t , the method reads a sentence and attempts to determine the class of the document.",
"The key aspect of the work is a Markov Decision Process (MDP), where each sentence is modeled in a TFIDF vector.",
"More recently, (Escalante et al., 2016) proposed a straightforward solution for early detection scenarios by using the nave Bayes classifier.",
"The idea consists in training with full documents, but when partial information has to be classified, the maximum a posteriori probability was estimated over the available text.",
"Using this simple yet effective approach, the authors obtained competitive performance with the method in (Dulac-Arnold et al., 2011).",
"Furthermore, results reported in (Escalante et al., 2016) were the first evaluation on early sexual predator detection.",
"In (Escalante et al., 2017) the authors propose methods to exploit Profile Based Representations (PBR's) for words (Lopez-Monroy et al., 2015).",
"PBRs are Distributional Term Representations of terms in the vocabulary.",
"Similar to word embeddings these representations build a vector for each word, which aim to extract/learn concepts from simple occurrence statistics of terms in the target classes.",
"PBRs capture discriminative information in a very low dimensional and non-sparse space suitable for early text classification problems.",
"In other work, (Errecalde et al., 2017) successfully adapted a version of PBR's for the problem of early depression detection in the context of the eRisk-CLEF17 shared task.",
"The evidence about PBRs suggests that this representation can naturally cope with missing information and obtain discriminative representations for incomplete documents.",
"Nevertheless, just as the vast majority of word embeddings in the literature for standard text classification, there is no consensus about how to exploit these term vectors to represent entire phrases or documents (e.g., the most common strategy is to average the term vectors in documents).",
"The proposed method is based on creating meta-1217 words to represent documents.",
"Clustering words into meaningful groups based on some measure of similarity to represent text is not a new concept.",
"One of the classic approaches is term clustering in an unsupervised manner that was first investigated by (Lewis, 1992).",
"He called his method reciprocal nearest neighbor clustering .",
"His method consists of joining words that are similar according to a measure of similarity.",
"In other work, Brown et al. (1992) explored the idea of discovering similarities between words to obtain clusters at different levels.",
"One key difference with our proposal is that in (Brown et al., 1992), terms are deterministically/probabilistically associated with a discrete class, where terms that are in the same class are similar in some aspect.",
"However in our proposed strategy, we exploit word vectors instead of a discrete random deterministic variable (e.g., soft/hard partitions of word sets).",
"This makes possible to discover different clusters and meta-words if we change the word representation.",
"Thus, the proposed strategy is highly adaptable to other domains, where the specialization would be achieved by changing the word representation for the problem.",
"In other work, Li and Jain (1998) found that term grouping helps to reduce the feature dimensionality, and at the same time, overcomes the generalization problem of feature selection.",
"The evidence has showed that the performance of the classifier is, at least maintained (Li and Jain, 1998; Slonim and Tishby, 2001).",
"Finally, other authors have also studied the problem of term clustering under a supervised scheme.",
"For example, Baker and McCallum (1998) used a supervised scheme to cluster similar words.",
"They carried out experiments using a Naive Bayes classifier and found results improvement by using a single word representation.",
"The methods proposed in this research work follow a line of thinking focused on the document representation rather than term representation.",
"Hence, the proposed method takes advantage of specialized vector representation of words (e.g., PBR), but several extensions can be envisioned using other word embeddings in the literature.",
"The benefits of our approach are that it is model independent, easy to implement, and computes lower dimensional and less-sparse representations than traditional BoW.",
"More important, our method improves over state of the art methods, outperforming the methods in (Errecalde et al., 2017; Escalante et al., 2017) that in turn, outperform that in (Dulac-Arnold et al., 2011; Escalante et al., 2016).",
"We propose a multi resolution representation that allows to generate multiple views of the analyzed document.",
"The intuition behind the proposal of a multi-resolution representation is that words will activate differently each view according to the amount of available text.",
"We assume that having different resolution levels will allow to effectively represent the content of short and large texts as needed along different early stages.",
"The proposed multi-resolution framework is depicted in Figure",
"1. The idea consists in associating words with similar meaning to the same meta-words in each resolution space.",
"Documents are then represented by multiple Bag-of-Centroids (BoC), that is, multiple histograms accounting for the occurrence of coarse concepts.",
"Hence, this representation can be seen as multiple BoW representations that incorporate multiple semantic resolutions.",
"In Section 3.1 we describe the process to build a Bag-of-Centroids at a single resolution, then in Section 3.2 we formally present the Multi-Resolution variant.",
"Let D = { ( d 1 , y 1 ) , . . . , ( d h , y h ) } be a training set of h -pairs of documents d i and class labels y i .",
"Also let V = { w 1 , . . . , w r } denote the vocabulary of terms (in our case words).",
"In order to create the Bag of Centroids ( BoC ) representation of each document, we first compute the vector representation v i of each word w i in the vocabulary of the collection.",
"Note that our framework is agnostic to the underlying process for learning word representations and therefore any word vector representation can be used, for example word embeddings (Mikolov et al., 2013) or distributional term representations (Lavelli et al., 2004).",
"The proposed framework is based on the idea of clustering words using the semantic distance in the word embedding space.",
"Thus, the first step of the algorithm consists of clustering the word embedding vectors v i and finding the cluster centers to create the proposed meta-words.",
"The representation of the vocabulary collection in the word embedding space W is the input for the clustering 1218 Figure 1: Algorithm to represent documents as meta-words using three hypothetical resolutions.",
"algorithm.",
"For this purpose a variety of clustering approaches can be used.",
"In our experimental evaluation, we explored different algorithms and found out that k -means offers a good tradeoff between performance and speed.",
"We applied k -means to the W representation to find the center of the clusters C = { c 1 , c 2 , . . . , c k } , with k being the number of selected centroids.",
"Then, based on these cluster centers and using l 1 -norm, we found a one-to-one association of each word to the closest cluster center in the word embedding space.",
"In other words, for each word v i , we can find an associated cluster center or meta-word c u with u { 1 , 2 , . . . , k } .",
"We denote this mapping by c u = closest ( v i , C ) , where closest returns the centroid in C with the minimum distance to v i .",
"Finally, the BoC k representation for each document d j corresponds to BoC k ( d j ) = { ( c , n ) } =1 ...k where c corresponds to each of the k centroids and n = |{ v i | v i d j , c = closest ( v i , C ) }| . In other words, BoC k ( d j ) corresponds to a histogram of centroid frequencies, where each pair ( c , n ) represents a centroid (meta-word) and its corresponding frequency in the document. The BoC algorithm depends on one parameter: the number of clusters used to represent each document. This parameter is associated with the level of semantic coarseness used in the representation. In this regard, coarseness refers to the level of meta-word inclusivity: the more words associated with a single meta-word, the coarser the representation. Conversely, with fewer words, the representation becomes more granular. Note that this representation has well known parallels in the extreme cases. When each word becomes a centroid, the resulting representation is equivalent to the typical BoW representation, whereas a coarser representation, with only one meta word, will be equivalent to having the average meta-word of the entire collection. 3.2 Multi-Resolution BoC The above proposed framework is particularly suitable for incorporating multi-resolution processing, given that the main parameter is related to the granularity or coarseness of the representation. As we will show in our analysis, this property is useful for early scenarios, since few/coarse meta-words allow to better encode documents with little text, whereas many/granular meta-words are useful when more text become available. We propose to exploit this multi-resolution version of the BoC representation. In this extension of the basic algorithm, we use a partition of the word embedding space at multiple levels and concatenate them into a new representation. Combining the different granularities into a single representation results in a more robust document model that can help to capture different amounts of text as needed. Intuitively, the coarser levels sufficiently classify documents in early stages, while the more granular levels exploit the additional evidence from longer documents on late stages. We present quantitative and qualitative experiments that support this claim in two datasets: Sexual Predator Detection and Depression Detection. We call this variation of the BoC representation Multi-Resolution-BoC ( MulR ). Formally: MulR ( d j ) = { BoC k 1 ( d j ) BoC k 2 ( d j ) . . . BoC k n ( d j ) } , where { k 1 , k 2 , . . . k n } correspond to a set of granular levels. Figure 1 shows the general framework, graphically depicting the process involved in transforming a document into a representation based on meta-words. The fig-ure also includes the process of multi-resolution modification described above. In the figure, the 1219 meta-words depicted with 'blue' represent the more granular clusters, and those depicted with 'green' represent the less granular clusters.",
"The multi-resolution BoC variation improves the performance by combining the information present at various levels of granularity.",
"Moreover, when documents are closely related, more fine grained features allow to capture finer details and therefore produces better text classification results.",
"This multi-resolution approach combines the advantages of both approaches to create an overall more effective classification method.",
"For experiments we considered the two data sets described in Table",
"2. The tasks are Sexual Predator Detection (SPD) and Depression Detection, where clearly early detection is crucial.",
"For the former we used the only publicly available data set for sexual predator detection (Inches and Crestani, 2012).",
"This data set was released in the context of the sexual predator identification task at PAN-CLEF'12 and comprises a large number of chat conversations that include real sexual predators.",
"Thus, the task approached is that of identifying those conversations that potentially include a sexual predator, as in (Villatoro-Tello et al., 2012; Escalante et al., 2013, 2016).",
"For the depression detection task we use the dataset presented in (Losada et al., 2017).",
"In this dataset, each instance has the post history for a user, and depressed users were self-identified as having been diagnosed with depression.",
"For our experiments, we lower case the text in documents and use words and punctuation marks as terms 1 .",
"The representation obtained for each document is then processed by a Support Vector Machine (SVM) with a linear kernel.",
"For the evaluation of the earliness performance, we report the performance of the different methods when using increasing amounts of textual evidence (chunk by chunk evaluation).",
"This evaluation allows to quantify prediction performance when using partial information in documents, and it is a strategy that has been used to evaluate early classification (Escalante et al., 2016; Errecalde et al., 2017; Losada et al., 2017).",
"For 1 We used terms with frequency higher than 10 in the training datasets.",
"the evaluation of performance we used the f 1 = 2 precision recall precision + recall measure.",
"This decision was made in agreement with previous work that reports this metric for the positive class (Errecalde et al., 2017).",
"Please note that, contrary to other measures, such as accuracy, f 1 measure accounts for the class imbalance problem when only the positive class is analyzed.",
"This is desirable for the data sets we consider as they are highly unbalanced.",
"Word-vector representations: As previously mentioned, the proposed MulR representation generalizes word-vector representations and thus can extend any representation that models each term in the vocabulary using a vector.",
"For this purpose a wide variety of word embeddings or distributional term representations could be used.",
"Both of them exploit the distributional hypothesis to build word vectors, nonetheless they differ in the strategy to capture the relevant information.",
"In this work we use the widely used word2vec, but also other representations that have been used in recent works for these collections.",
"In Table 1 we describe each of the word vector representations considered for this work 2 .",
"Baselines: The main baselines in this work are methods based on the idea of topic modeling for text classification.",
"Topic-based representations group words into topics defined by a set of related words 3 .",
"Given the strong relation to our method we compare our proposal against Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA).",
"Furthermore, we also compare with Bag-of-Words using Term Frequency Inverse Document Frequency, since it is a traditional baseline in text categorization tasks.",
"In this section we report the experimental results for the MulR representation and the selected approaches from the state-of-the-art.",
"In all the experiments we trained the reference classifier (SVM) using full-length documents in the training dataset.",
"In the testing phase, each approach uses all the available information in each of the ten chunks (each chunk increases the available text in 10%).",
"More specifically, we generate document representations starting with the first chunk, and then incrementally adding one chunk at a time.",
"The 2 For distributional representations we used the framework at https://github.com/lopez-monroy/FeatureSpaceTree 3 We empirically set to 200 the size of the concept space.",
"models will then make predictions incrementally as well.",
"We report f 1 performance when using different amounts of text from test documents.",
"For the proposed MulR representation, we build 5 different resolutions: 10, 50, 100, 500, and 1000.",
"The goal was to generate meta-words at different levels of granularity, and we plan to further explore the impact of these resolutions in our future research.",
"In the following experiments, we used the word representations in Table 1 to build our proposed MulR document representation.",
"For comparison purposes we also generate an alternative document representation by averaging (Avg) term-vectors of words in each document, which is a popular strategy to build document representations.",
"Finally, we also compare against several traditional baselines such as the Bag-of-Words, LSA, and specialized methods in each collection (Escalante et al., 2017; Errecalde et al., 2017).",
"We evaluate the usefulness of all these different representations in the two early classification tasks mentioned earlier.",
"methodologies for the SPD early detection task (Figure 2).",
"We also show results for MulR and different word representations in Table 3, where several findings can be outlined.",
"First of all, results obtained in early stages (chunk 1 to 4) using the proposed MulR are clearly superior to those obtained averaging word vectors.",
"This is an interesting outcome, since the MulR representation seems to be useful for early scenarios independently of the word vector representation.",
"In the particular case of MulR(TVT), the representation obtains an outstanding performance when having little information (e.g., performance between 71% and 90% before reading 50% of the text).",
"More important, performance improves as more evidence is available (i.e., see the steady improvement up to 97%).",
"These results show that MulR is a robust representation, even in the presence of different amounts of textual evidence, with a clear advantage for early classification stages.",
"In Figure 2 we can also observe that MulR representation outperformed, by a large margin, the proposed baselines; BoW-TFIDF, LSA, LDA.",
"Furthermore, MulR representation obtains better performance than the work in (Escalante et al., 2017), which consists in averaging the PBRs (same that Avg-PBR) and is the state-of-the-art in early SPD.",
"Note that different than (Escalante et al., 2017), the proposed MulR significantly improves even after reading 40% of the information.",
"The experimental results in Table 3 also show the 1221 Figure 2: F 1 scores for the chunk by chunk evaluation of the reference methodologies in Sexual Predator Detection.",
"1. The most useful word vector representation is TVT (Errecalde et al., 2017).",
"This is not surprising, since TVT is a specialized distribution term representation for early prediction scenarios.",
"2. Word2Vec 4 representations obtained moderate performance in all experiments.",
"We infer that much more data of these specific social media domains are needed in order to build suitable models.",
"4 Embeddings were trained in each dataset.",
"We tested pre-trained word embeddings for wikipedia/twitter, but the performance was worse.",
"3. MulR representation is an effective solution for all early chunks, but as more text is available, the other methodologies significantly increase their discriminative power, as seen in results for later chunks.",
"In fact, some representations such as Avg(DOR) can outperform MulR(DOR) representation in late stages.",
"However, even under these conditions MulR(TVT) and MulR(DOR) outperform all reference methodologies.",
"In Table 4 we show the experimental results for early depression detection.",
"In Figure 3 we highlight the performance of the proposal and the reference methodologies.",
"From these results we point 1222 Figure 3: F 1 scores for the chunk by chunk evaluation of the reference methodologies in Depression Detection.",
"out several interesting findings.",
"The first one is that for this collection, results obtained by the proposal are clearly superior to others in early stages.",
"In general, we can observe the following:",
"1. The most useful representation in early stages was MulR(TVT), which have considerable improvements between 5% and 2% in chunks 1 to 4.",
"2. Word Embeddings and DOR showed a similar behavior than in SPD.",
"But in late stages, the best representation was Avg(DOR).",
"3. Depression Detection problem is a much harder problem than SPD.",
"The F 1 measure is under 60% in most of the results.",
"This could be due to the highly unbalanced dataset in two ways:",
"i) the number of instances in each class, and",
"ii) the amount of text contained in documents.",
"In this section we aim to study the role of the different resolutions in early scenarios.",
"5 The purpose of the first analysis is to observe the performance of each individual resolution in MulR.",
"In Table 5 we show the results of MulR(TVT) under each of the five resolutions ( R 1 = 10 , R 2 = 50 , R 3 = 100 , R 4 = 500 , R 5 = 1000 ) and each chunk.",
"5 The number and size of resolutions, could improve the performance, but it is a future research path to enhance the characterization of specific data sets.",
"For early SPD the evidence is clear; as the resolution increases the performance in early stages decrease.",
"6 Also note that the higher the resolution, the more chunks needed to outperform the result of the previous resolution.",
"For example, resolution R 3 outperforms R 2 in chunk 8.",
"Also note that R 4 and R 5 needed more chunks to obtain comparable performance than R 2 .",
"Our experimental results excluding one resolution at the time showed worse performance, therefore all of them are essential in the overall classification.",
"Clearly, this evidence shows that the MulR representation is in fact very useful.",
"In Table 6 we provide further evidence about the role of different resolutions.",
"In this complementary analysis we study each chunk at test data.",
"For this we use the MulR learned in training to represent test documents, then we compute the Information Gain using Weka (Hall et al., 2009) at each test chunk.",
"In Table 6 we show the number of features in each resolution R i that are present in the top ten meta-words of the MulR(TVT).",
"The analysis complements the evidence, lower resolutions have higher IG at early chunks, whereas higher 6 The only exception to this is R 1 , which has the lowest overall performance.",
"This is somewhat expected since this space only has 10 features to represent documents.",
"In this paper we proposed a multi resolution representation that allows to generate multiple views of the document.",
"Intuitively these views expose different semantic meanings for words and documents along different resolutions.",
"The different resolutions allow to effectively represent the content of short and large texts at different early stages.",
"The MulR obtained the best results reported so far on the early Sexual Predator Detection task dataset (Inches and Crestani, 2012).",
"For Depression Detection the chunk by chunk evaluation shows promising results for MulR in early stages.",
"What is more, it was shown that the MulR further improves the early recognition performance in the two tasks using different word representations.",
"The relevance of the resolutions in these results is a key factor to understand the proposed MulR and future extensions.",
"These results provide solid evidence to further research on this topic and encourage researchers to apply and evaluate the usefulness of multi-resolution features for other related early tasks.",
"This research was partially supported by CONACYT-Mexico (project FC-2016/2410)."
] | [
"objective",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Evaluation of cross-lingual encoders is usually performed either via zero-shot cross-lingual transfer in supervised downstream tasks or via unsupervised cross-lingual textual similarity.",
"In this paper, we concern ourselves with reference-free machine translation (MT) evaluation where we directly compare source texts to (sometimes low-quality) system translations, which represents a natural adversarial setup for multilingual encoders.",
"Reference-free evaluation holds the promise of web-scale comparison of MT systems.",
"We systematically investigate a range of metrics based on state-of-the-art cross-lingual semantic representations obtained with pretrained M-BERT and LASER.",
"We find that they perform poorly as semantic encoders for reference-free MT evaluation and identify their two key limitations, namely,",
"(a) a semantic mismatch between representations of mutual translations and, more prominently,",
"(b) the inability to punish translationese, i.e., low-quality literal translations.",
"We propose two partial remedies: (1) post-hoc re-alignment of the vector spaces and (2) coupling of semantic-similarity based metrics with target-side language modeling.",
"In segment-level MT evaluation, our best metric surpasses reference-based BLEU by 5.7 correlation points.",
"We make our MT evaluation code available.",
"1 1 Introduction A standard evaluation setup for supervised machine learning (ML) tasks assumes an evaluation metric which compares a gold label to a classifier prediction.",
"This setup assumes that the task has clearly defined and unambiguous labels and, in most cases, that an instance can be assigned few labels.",
"These assumptions, however, do not hold for natural language generation (NLG) tasks like machine trans-1 https://github.com/AIPHES/ ACL20-Reference-Free-MT-Evaluation lation (MT) (Bahdanau et al., 2015; Johnson et al., 2017) and text summarization (Rush et al., 2015; Tan et al., 2017), where we do not predict a single discrete label but generate natural language text.",
"Thus, the set of labels for NLG is neither clearly defined nor finite.",
"Yet, the standard evaluation protocols for NLG still predominantly follow the described default paradigm: (1) evaluation datasets come with human-created reference texts and (2) evaluation metrics, e.g., BLEU (Papineni et al., 2002) or METEOR (Lavie and Agarwal, 2007) for MT and ROUGE (Lin and Hovy, 2003) for summarization, count the exact label (i.e., n -gram) matches between reference and system-generated text.",
"In other words, established NLG evaluation compares semantically ambiguous labels from an unbounded set (i.e., natural language texts) via hard symbolic matching (i.e., string overlap).",
"The first remedy is to replace the hard symbolic comparison of natural language labels with a soft comparison of texts' meaning, using semantic vector space representations.",
"Recently, a number of MT evaluation methods appeared focusing on semantic comparison of reference and system translations (Shimanaka et al., 2018; Clark et al., 2019; Zhao et al., 2019).",
"While these correlate better than n -gram overlap metrics with human assessments, they do not address inherent limitations stemming from the need for reference translations, namely: (1) references are expensive to obtain; (2) they assume a single correct solution and bias the evaluation, both automatic and human (Dreyer and Marcu, 2012; Fomicheva and Specia, 2016), and (3) limitation of MT evaluation to language pairs with available parallel data.",
"Reliable reference-free evaluation metrics, directly measuring the (semantic) correspondence between the source language text and system translation, would remove the need for human references and allow for unlimited MT evaluations: any monolingual corpus could be used for evaluating MT systems.",
"However, the proposals of reference-free MT evaluation metrics have been few and far apart and have required either non-negligible supervision (i.e., human translation quality labels) (Spe-cia et al., 2010) or language-specific preprocessing like semantic parsing (Lo et al., 2014; Lo, 2019), both hindering the wide applicability of the proposed metrics.",
"Moreover, they have also typically exhibited performance levels well below those of standard reference-based metrics (Ma et al., 2019).",
"In this work, we comparatively evaluate a number of reference-free MT evaluation metrics that build on the most recent developments in multilingual representation learning, namely cross-lingual contextualized embeddings (Devlin et al., 2019) and cross-lingual sentence encoders (Artetxe and Schwenk, 2019).",
"We investigate two types of crosslingual reference-free metrics: (1) Soft token-level alignment metrics find the optimal soft alignment between source sentence and system translation using Word Mover's Distance (WMD) (Kusner et al., 2015).",
"Zhao et al. (2019) recently demonstrated that WMD operating on BERT representations (De-vlin et al., 2019) substantially outperforms baseline MT evaluation metrics in the reference-based setting.",
"In this work, we investigate whether WMD can yield comparable success in the reference-free (i.e., cross-lingual) setup; (2) Sentence-level similarity metrics measure the similarity between sentence representations of the source sentence and system translation using cosine similarity.",
"Our analysis yields several interesting findings.",
"(i) We show that, unlike in the monolingual reference-based setup, metrics that operate on contextualized representations generally do not outperform symbolic matching metrics like BLEU, which operate in the reference-based environment.",
"(ii) We identify two reasons for this failure:",
"(a) firstly, cross-lingual semantic mismatch, especially for multi-lingual BERT (M-BERT), which construes a shared multilingual space in an unsupervised fashion, without any direct bilingual signal;",
"(b) secondly, the inability of the state-of-the-art crosslingual metrics based on multilingual encoders to adequately capture and punish translationese, i.e., literal word-by-word translations of the source sentenceas translationese is an especially persistent property of MT systems, this problem is particularly troubling in our context of reference-free MT evaluation.",
"(iii) We show that by executing an additional weakly-supervised cross-lingual re-mapping step, we can to some extent alleviate both previous issues.",
"(iv) Finally, we show that the combination of cross-lingual reference-free metrics and language modeling on the target side (which is able to detect translationese), surpasses the performance of reference-based baselines.",
"Beyond designating a viable prospect of web-scale domain-agnostic MT evaluation, our findings indicate that the challenging task of reference-free MT evaluation is able to expose an important limitation of current state-of-the-art multilingual encoders, i.e., the failure to properly represent corrupt input, that may go unnoticed in simpler evaluation setups such as zero-shot cross-lingual text classifi-cation or measuring cross-lingual text similarity not involving adversarial conditions.",
"We believe this is a promising direction for nuanced, fine-grained evaluation of cross-lingual representations, extending the recent benchmarks which focus on zero-shot transfer scenarios (Hu et al., 2020).",
"Manual human evaluations of MT systems undoubtedly yield the most reliable results, but are expensive, tedious, and generally do not scale to a multitude of domains.",
"A significant body of research is thus dedicated to the study of automatic evaluation metrics for machine translation.",
"Here, we provide an overview of both reference-based MT evaluation metrics and recent research efforts towards reference-free MT evaluation, which leverage cross-lingual semantic representations and unsupervised MT techniques.",
"Reference-based MT evaluation.",
"Most of the commonly used evaluation metrics in MT compare system and reference translations.",
"They are often based on surface forms such as n -gram overlaps like BLEU (Papineni et al., 2002), SentBLEU, NIST (Doddington, 2002), chrF++ (Popovic, 2017) or METEOR++(Guo and Hu, 2019).",
"They have been extensively tested and compared in recent WMT metrics shared tasks (Bojar et al., 2017a; Ma et al., 2018a, 2019).",
"These metrics, however, operate at the surface level, and by design fail to recognize semantic equivalence lacking lexical overlap.",
"To overcome these limitations, some research efforts exploited static word embeddings (Mikolov et al., 2013b) and trained embedding-based supervised metrics on sufficiently large datasets with available human judgments of translation quality (Shimanaka et al., 2018).",
"With the development of contextual word embeddings (Peters et al., 2018; Devlin et al., 2019), we have witnessed proposals of semantic metrics that account for word order.",
"For example, Clark et al. (2019) introduce a semantic metric relying on sentence mover's similarity and the contextualized ELMo embeddings (Peters et al., 2018).",
"Similarly, Zhang et al. (2019) describe a reference-based semantic similarity metric based on contextualized BERT representations (Devlin et al., 2019).",
"Zhao et al. (2019) generalize this line of work with their MoverScore metric, which computes the mover's distance, i.e., the optimal soft alignment between tokens of the two sentences, based on the similarities between their contextualized embeddings.",
"Mathur et al. (2019) train a supervised BERT-based regressor for reference-based MT evaluation.",
"Reference-free MT evaluation.",
"Recently, there has been a growing interest in reference-free MT evaluation (Ma et al., 2019), also referred to as quality estimation (QE) in the MT community.",
"In this setup, evaluation metrics semantically compare system translations directly to the source sentences.",
"The attractiveness of automatic reference-free MT evaluation is obvious: it does not require any human effort or parallel data.",
"To approach this task, Popovic et al. (2011) exploit a bag-of-word translation model to estimate translation quality, which sums over the likelihoods of aligned word-pairs between source and translation texts.",
"Specia et al. (2013) estimate translation quality using language-agnostic linguistic features extracted from source lanuage texts and system translations.",
"Lo et al. (2014) introduce XMEANT as a crosslingual reference-free variant of MEANT, a metric based on semantic frames.",
"Lo (2019) extended this idea by leveraging M-BERT embeddings.",
"The resulting metric, YiSi-2, evaluates system translations by summing similarity scores over words pairs that are best-aligned mutual translations.",
"YiSi-2-SRL optionally combines an additional similarity score based on the alignment over the semantic structures (e.g., semantic roles and frames).",
"Both metrics are reference-free, but YiSi-2-SRL is not resource-lean as it requires a semantic parser for both languages.",
"Moreover, in contrast to our proposed metrics, they do not mitigate the misalignment of cross-lingual embedding spaces and do not integrate a target-side language model, which we identify to be crucial components.",
"Recent progress in cross-lingual semantic similarity (Agirre et al., 2016; Cer et al., 2017) and unsupervised MT (Artetxe and Schwenk, 2019) has also led to novel reference-free metrics.",
"For instance, Yankovskaya et al. (2019) propose to train a metric combining multilingual embeddings extracted from M-BERT and LASER (Artetxe and Schwenk, 2019) together with the log-probability scores from neural machine translation.",
"Our work differs from that of Yankovskaya et al. (2019) in one crucial aspect: the cross-lingual reference-free metrics that we investigate and benchmark do not require any human supervision.",
"Cross-lingual Representations.",
"Cross-lingual text representations offer a prospect of modeling meaning across languages and support crosslingual transfer for downstream tasks (Klementiev et al., 2012; Ruckle et al., 2018; Glavas et al., 2019; Josifoski et al., 2019; Conneau et al., 2020).",
"Most recently, the (massively) multilingual encoders, such as multilingual M-BERT (Devlin et al., 2019), XLM-on-RoBERTa (Conneau et al., 2020), and (sentence-based) LASER, have profiled themselves as state-of-the-art solutions for (massively) multilingual semantic encoding of text.",
"While LASER has been jointly trained on parallel data of 93 languages, M-BERT has been trained on the concatenation of monolingual data in more than 100 languages, without any cross-lingual mapping signal.",
"There has been a recent vivid discussion on the cross-lingual abilities of M-BERT (Pires et al., 2019; K et al., 2020; Cao et al., 2020).",
"In particular, Cao et al. (2020) show that M-BERT often yields disparate vector space representations for mutual translations and propose a multilingual remapping based on parallel corpora, to remedy for this issue.",
"In this work, we introduce re-mapping solutions that are resource-leaner and require easy-to-obtain limited-size word translation dictionaries rather than large parallel corpora.",
"In the following, we use x to denote a source sentence (i.e., a sequence of tokens in the source lan-guage), y to denote a system translation of x in the target language, and y (cid:63) to denote the human reference translation for x .",
"We start from the MoverScore (Zhao et al., 2019), a recently proposed reference-based MT evaluation",
"metric designed to measure the semantic similarity between system outputs ( y ) and human references ( y (cid:63) ).",
"It finds an optimal soft semantic alignments between tokens from y and y (cid:63) by minimizing the Word Mover's Distance (Kusner et al., 2015).",
"In this work, we extend the MoverScore metric to operate in the cross-lingual setup, i.e., to measure the semantic similarity between n -grams (unigram or bigrams) of the source text x and the system translation y , represented with embeddings originating from a cross-lingual semantic space.",
"First, we decompose the source text x into a sequence of n -grams, denoted by x n = ( x n 1 , . . . , x nm ) and then do the same operation for the system translation y , denoting the resulting sequence of n-grams with y n .",
"Given x n and y n , we can then define a distance matrix C such that C ij = (cid:107) E ( x ni ) E ( y nj ) (cid:107) 2 is the distance between the i -th n -gram of x and the j -th n -gram of y , where E is a cross-lingual embedding function that maps text in different languages to a shared embedding space.",
"With respect to the function E , we experimented with cross-lingual representations induced",
"(a) from static word embeddings with RCSLS (Joulin et al., 2018))",
"(b) with M-BERT (Devlin et al., 2019) as the multilingual encoder; with a focus on the latter.",
"For M-BERT, we take the representations of the last transformer layer as the text representations.",
"WMD between the two sequences of n -grams x n and y n with associated n -gram weights 2 to f x n R | x n | and f y n R | y n | is defined as: m ( x , y ) := WMD ( x n , y n ) = min F (cid:88) ij C ij F ij , s.t. F 1 = f x n , F (cid:124) 1 = f y n , where F R | x n || y n | is a transportation matrix with F ij denoting the amount of flow traveling from x n i to y n j .",
"In addition to measuring semantic distance between x and y at word-level, one can also encode them into sentence representations with multilingual sentence encoders like LASER (Artetxe and Schwenk, 2019), and then measure their cosine distance",
"Initial analysis indicated that, despite the multilingual pretraining of M-BERT (Devlin et al., 2019) and LASER (Artetxe and Schwenk, 2019), the monolingual subspaces of the multilingual spaces they induce are far from being semantically well-aligned, i.e., we obtain fairly distant vectors for mutual word or sentence translations.",
"3 To this end, we apply two simple, weakly-supervised linear projection methods for post-hoc improvement of the cross-lingual alignments in these multilingual representation spaces.",
"Notation.",
"Let D = { ( w 1 (cid:96) , w 1 k ) , . . . , ( w n(cid:96) , w nk ) } be a set of matched word or sentence pairs from two different languages (cid:96) and k .",
"We define a remapping function f such that any f ( E ( w (cid:96) )) and E ( w k ) are better aligned in the resulting shared vector space.",
"We investigate two resource-lean choices for the re-mapping function f .",
"Linear Cross-lingual Projection (CLP).",
"Following related work (Schuster et al., 2019), we re-map contextualized embedding spaces using linear projection.",
"Given (cid:96) and k , we stack all vectors of the source language words and target language words for pairs D , respectively, to form matrices X (cid:96) and X k R n d , with d as the embedding dimension and n as the number of word or sentence alignments.",
"The word pairs we use to calibrate M-BERT are extracted from EuroParl (Koehn, 2005) using FastAlign (Dyer et al., 2013), and the sentence pairs to calibrate LASER are sampled directly from EuroParl.",
"4 Mikolov et al. (2013a) propose to learn a projection matrix W R d d by minimizing the Euclidean distance beetween the projected source language vectors and their corresponding target language vectors: min W (cid:107) W X (cid:96) X k (cid:107) 2 .",
"Xing et al. (2015) achieve further improvement on the task of bilingual lexicon induction (BLI) by constraining W to an orthogonal matrix, i.e., such that W (cid:124) W = I .",
"This turns the optimization into the well-known Procrustes problem (Schonemann, 1966) with the following closed-form solution: W = UV (cid:124) , U V (cid:124) = SVD ( X (cid:96) X (cid:124) k ) 3 LASER is jointly trained on parallel corpora of different languages, but in resource-lean language pairs, the induced embeddings from mutual translations may be far apart.",
"4 While LASER requires large parallel corpora in pretraining, we believe that fine-tuning/calibrating the embeddings post-hoc requires fewer data points.",
"We note that the above CLP re-mapping is known to have deficits, i.e., it requires the embedding spaces of the involved languages to be approximately isomorphic (Sgaard et al., 2018; Vulic et al., 2019).",
"Recently, some re-mapping methods that reportedly remedy for this issue have been suggested (Glavas and Vulic, 2020; Mohiuddin and Joty, 2020).",
"We leave the investigation of these novel techniques for our future work.",
"Universal Language Mismatch-Direction (UMD) Our second post-hoc linear alignment method is inspired by the recent work on removing biases in distributional word vectors (Dev and Phillips, 2019; Lauscher et al., 2019).",
"We adopt the same approaches in order to quantify and remedy for the language bias, i.e., representation mismatches between mutual translations in the initial multilingual space.",
"Formally, given (cid:96) and k , we create individual misalignment vectors E ( w i (cid:96) ) E ( w i k ) for each bilingual pair in D .",
"Then we stack these individual vectors to form a matrix Q R n d .",
"We then obtain the global misalignment vector v B as the top left singular vector of Q .",
"The global misalignment vector presumably captures the direction of the representational misalignment between the languages better than the individual (noisy) misalignment vectors E ( w i(cid:96) ) E ( w ik ) .",
"Finally, we modify all vectors E ( w (cid:96) ) and E ( w k ) , by subtracting their projections onto the global misalignment direction vector v B : f ( E ( w (cid:96) )) = E ( w (cid:96) ) cos ( E ( w (cid:96) ) , v B ) v B .",
"Language Model BLEU scores often fail to re-flect the fluency level of translated texts (Edunov et al., 2019).",
"Hence, we use the language model (LM) of the target language to regularize the crosslingual semantic similarity metrics, by coupling our cross-lingual similarity scores with a GPT language model of the target language (Radford et al., 2018).",
"We expect the language model to penalize translationese, i.e., unnatural word-by-word translations and boost the performance of our metrics.",
"5 4 Experiments In this section, we evaluate the quality of our MT reference-free metrics by correlating them with human judgments of translation quality.",
"5 We linearly combine the cross-lingual metrics with the LM scores using a coefficient of 0.1 for all setups.",
"We choose this value based on initial experiments on one language pair.",
"judgments are based on comparing human references and system predictions.",
"We will discuss this discrepancy in 5.3.",
"Word-level metrics.",
"We denote our word-level alignment metrics based on WMD as MOVERSCORE-NGRAM + ALIGN (EMBEDDING ), where ALIGN is one of our two post-hoc crosslingual alignment methods (CLP or UMD).",
"For example, MOVER -2 + UMD(M-BERT) denotes the metric combining MoverScore based on bigram alignments, with M-BERT embeddings and UMD as the post-hoc alignment method.",
"Sentence-level metric.",
"We denote our sentence-level metrics as: COSINE + ALIGN (EMBEDDING ).",
"For example, COSINE + CLP(LASER) measures the cosine distance between the sentence embeddings obtained with LASER, post-hoc aligned with CLP.",
"We collect the source language sentences, their system and reference translations from the WMT17-19 news translation shared task (Bojar et al., 2017b; Ma et al., 2018b, 2019), which contains predictions of 166 translation systems across 16 language pairs in WMT17, 149 translation systems across 14 language pairs in WMT18 and 233 translation systems across 18 language pairs in WMT19.",
"We evaluate for X-en language pairs, selecting X from a set of 12 diverse languages: German (de), Chinese (zh), Czech (cs), Latvian (lv), Finnish (fi), Russian (ru), and Turkish (tr), Gujarati (gu), Kazakh (kk), Lithuanian (lt) and Estonian (et).",
"Each language pair in WMT17-19 has approximately 3,000 source sentences, each associated to one reference translation and to the automatic translations generated by participating systems.",
"We compare with a range of reference-free metrics: ibm1-morpheme and ibm1-pos4gram (Popovic, 2012), LASIM (Yankovskaya et al., 2019), LP (Yankovskaya et al., 2019), YiSi-2 and YiSi-2-srl (Lo, 2019), and reference-based baselines BLEU (Papineni et al., 2002), SentBLEU (Koehn et al., 2007) and ChrF++ (Popovic, 2017) for MT evaluation (see 2).",
"6 The main results are reported on WMT17.",
"We report the results obtained on WMT18 and WMT19 in the Appendix.",
"6 The code of these unsupervised metrics is not released, thus we compare to their official results on WMT19 only.",
"Figure 1 shows that our metric MOVER -2 + CLP(M-BERT) LM, operating on modified M-BERT with the post-hoc re-mapping and combining a target-side LM, outperforms BLEU by 5.7 points in segment-level evaluation and achieves comparable performance in the system-level evaluation.",
"Figure 2 shows that the same metric obtains 15.3 points gains (73.1 vs. 57.8), averaged over 7 languages, on WMT19 (system-level) compared to the the state-of-the-art reference-free metric YiSi-2.",
"Except for one language pair, gu-en, our metric performs on a par with the reference-based BLEU (see Table 8 in the Appendix) on system-level.",
"In Table 1, we exhaustively compare results for several of our metric variants, based either on M-BERT or LASER.",
"We note that re-mapping has 2010 2012 2014 2016 2018 2020 0 20 40 60 80 100 P e a r s o n C o rr e l a t i o n ibm1-pos4gram:33.9 ibm1-morpheme:52.4 LASIM:56.2 LP:48.1 YiSi-2:57.8 This Work:73.1 System-level BLEU: 91.2 Figure 2: Average results of our metric best-performing metric, together with the official results of reference-free metrics, and reference-based BLEU on system-level WMT19.",
"considerable effect for M-BERT (up to 10 points improvements), but much less so for LASER.",
"We believe that this is because the underlying embedding space of LASER is less misaligned' since it has been (pre-)trained on parallel data.",
"7 While the re-mapping is thus effective for metrics based on M-BERT, we still require the target-side LM to outperform BLEU.",
"We assume the LM can address challenges that the re-mapping apparently is not able to handle properly; see our discussion in 5.1.",
"Overall, we remark that none of our metric com-7 However, in the appendix, we find that re-mapping LASER using 2k parallel sentences achieves considerable improvements on low-resource languages, e.g., kk-en (from -61.1 to 49.8) and lt-en (from 68.3 to 75.9); see Table 8.",
"binations performs consistently best.",
"The reason may be that LASER and M-BERT are pretrained over hundreds of languages with substantial differences in corpora sizes in addition to the different effects of the re-mapping.",
"However, we observe that MOVER -2 + CLP(M-BERT) performs best on average over all language pairs when the LM is not added.",
"When the LM is added, MOVER -2 + CLP(M-BERT) LM and COSINE + UMD (LASER) LM perform comparably.",
"This indicates that there may be a saturation effect when it comes to the LM or that the LM coefficients should be tuned individually for each semantic similarity metric based on cross-lingual representations.",
"We first analyze preferences of our metrics based on M-BERT and LASER ( 5.1) and then examine how much parallel data we need for re-mapping our vector spaces ( 5.2).",
"Finally, we discuss whether it is legitimate to correlate our metric scores, which evaluate the similarity of system predictions and source texts, to human judgments based on system predictions and references ( 5.3)."
] | [
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denoting only its meaning in a canonical form.",
"As such, it is ideal for paraphrase detection, a problem in which one is required to specify whether two sentences have the same meaning.",
"We show that na ve use of AMR in paraphrase detection is not necessarily useful, and turn to describe a technique based on latent semantic analysis in combination with AMR parsing that significantly advances state-of-the-art results in paraphrase detection for the Microsoft Research Paraphrase Corpus.",
"Our best results in the transductive setting are 86.6% for accuracy and 90.0% for F 1 measure.",
"Abstract Meaning Representation (AMR) parsing focuses on the conversion of natural language sentences into AMR graphs, aimed at abstracting away from the surface realizations of the sentences while preserving their meaning.",
"We make a first step towards showing that AMR can be used in practice for a task that requires identifying the canonicalization of language: paraphrase detection.",
"In a perfect world using AMR to test for paraphrasing relation of two sentences should be simple.",
"It would require finding the two AMR parses for each of the sentences, and then checking whether they are identical.",
"Since AMR is aimed at abstracting away from the surface form which is used to express meaning, two sentences should be paraphrases only if they have identical AMRs.",
"For instance, the three sentences:",
"1. He described her as a curmudgeon,",
"2. His description of her: curmudgeon,",
"3. She was a curmudgeon, according to his description.",
"should result in the same AMR graph as shown in Figure",
"1. However, in practice, things are different.",
"First, there are no known AMR parsers that really distil only the meaning in text.",
"For example, predicates which have interchangeable meaning use different AMR concepts, and there are errors that exist because of the machine learning techniques that are used for learning the parsers from data.",
"Finally, even human annotations do not yield per-fect AMRs, as the interannotator agreement reported in the literature for AMR is around 80% (Banarescu et al., 2013).",
"Second, meaning is often contextual, and it is not fully possible to determine the corresponding AMR parse just by looking at a given sentence.",
"Entity mentions denote different entities in different contexts, and similarly predicates and nouns are ambiguous and depend on context.",
"As such, one cannot expect to use AMR in the transparent way mentioned above to identify paraphrase relations.",
"However, we demonstrate in this paper that AMR can be used in a softer way to detect such relations.",
"Evaluation of AMR parsers is traditionally performed using the Smatch score (Cai and Knight, 2013).",
"However, Damonte et al. (2017) argue that more ad-hoc metrics can be useful for advancing AMR research.",
"Paraphrase detection can be seen 442 as a further benchmark for AMR parsers, highlighting their ability of abstracting away from syntax and representing the core concepts expressed in the sentence.",
"In order to advance research in AMR and its applications, it is important to have metrics that reflect on the ability of AMR graphs to have impact on subsequent tasks.",
"In this work we therefore use two different AMR parsers, comparing them throughout all experiments.",
"AMRs are rooted, edge labeled, node labeled, directed graphs.",
"They are biased towards the English language and rely on PropBank (Kingsbury and Palmer, 2002) for the definition of the main events in the sentence.",
"Nodes in an AMR graph represent events and concepts, while edges represent the relationships between them.",
"Banarescu et al. (2013) state that AMR are aimed at canonicalizing multiple ways of expressing the same idea, which could be of great assistance to solve the problem of paraphrase detection.",
"However, this goal is not entirely achieved in practice, and it will take long for AMR parsers to mature and achieve such canonicalization.",
"At the moment, for example, even a simple pair of sentences such as the boy desires the cake and the the boy wants the cake would not have the same canonical form by state-of-the-art AMR parsers.",
"While some researchers (Fodor, 1975) have doubted the practical possibility of canonicalizing language or finding identical paraphrases in English or otherwise, much work in NLP has been devoted to the problem of paraphrase identification (Mitchell and Lapata, 2010; Baroni and Lenci, 2010; Socher et al., 2011; Guo and Diab, 2012; Ji and Eisenstein, 2013) and more weakly, finding entailment between sentences and phrases (Dagan et al., 2006; Bos and Markert, 2005; Harabagiu and Hickl, 2006; Lewis and Steedman, 2013).",
"In this work, we use the AMRs parsed for given sentences as a mean to extract useful information and train paraphrase detection classifiers on top of them.",
"Our work falls under the category of distributional methods for paraphrase detection (Turney and Pantel, 2010; Mihalcea et al., 2006; Mitchell and Lapata, 2010; Guo and Diab, 2012; Ji and Eisenstein, 2013) such as with latent semantic",
"analysis (LSA, Landauer et al., 1998).",
"The main principle behind this approach is to detect semantic similarity through distributional representations for a given sentence and its potential paraphrase, where these representations are compared against each other according to some similarity metric or used as features with a discriminative classification method (Mihalcea et al., 2006; Guo and Diab, 2012; Ji and Eisenstein, 2013).",
"LSA is indeed one of the main tools in obtaining such distributional representations for the problem of paraphrase detection.",
"Most often, TF-IDF weighting has been used for building the sentence-term matrix, but Ji and Eisenstein (2013) have shown that a significant improvement can be achieved in detecting similarity if one re-weights the sentence-term matrix differently.",
"Indeed, this is one of our main contributions: we build on previous work on LSA for paraphrase detection and propose a technique to re-weight a sentence-concept matrix based on the AMR graphs for the given sentences.",
"More details on the use of LSA for paraphrase detection appear in Section 4.",
"AMR parsing is the task of converting natural language sentences into AMR graphs, which are Directed Acyclic Graphs (DAGs) in all cases except a few rare controversial cases.",
"This task embeds several common NLP problems together, such as named entity recognition, sentential-level corefer-ence resolution, semantic role labeling and word-sense disambiguation.",
"Several parsers for AMR have been recently developed (Flanigan et al., 2014; Wang et al., 2015; Peng et al., 2015; Pust et al., 2015; Goodman et al., 2016; Rao et al., 2015; Vanderwende et al., 2015; Artzi et al., 2015; Barzdins and Gosko, 2016a; Zhou et al., 2016; Damonte et al., 2017; Barzdins and Gosko, 2016b; Konstas et al., 2017).",
"Shared tasks were also organized in order to push forward the state-of-the-art (May, 2016; May and Priyadarshi, 2017).",
"Meaning representations are usually evaluated based on their compositionality (construction of a representation based on parts of the text in a consistent way), verifiability (ability to check whether a meaning representation is true in a given model of the world), unambiguity (ability to full disambiguate text into the representation in a way that does not leave any ambiguity lingering), inference (the existence of a calculus that can be used to 443 infer whether one meaning representation is logically implied by others) and canonicalization (the ability to map several surface forms, such as paraphrases, into a single unique meaning representa-tion).",
"In this paper, we evaluate AMR on its ability to canonicalize language through its assistance in deciding whether two sentences are paraphrases.",
"We note that this test is masked by the accuracy of the AMR parsers we use, which indeed do not give always fully correct predictions.",
"These errors in our paraphrase detection due to the accuracy of the AMR parser are different than those which originate in an inherent difficulty of representing paraphrases using AMR because of the limitations of the formalism and the annotation guidelines that AMR follows.",
"We experiment with two AMR parsers for which a public version is available.",
"The first is JAMR (Flanigan et al., 2014), which is a graph-based approach to AMR parsing.",
"It works by performing two steps on the input sentence: concept identification and relation identification.",
"The former discovers the concept fragments corresponding to span of words in the sentence, while the latter finds the optimal spanning connected subgraph from the concepts identified in the first step.",
"The concept identification step has quadratic complexity and the relation identification step is O ( | V | 2 log | V | ) , with | V | being the set of nodes in the AMR graph.",
"The second is AMREager (Damonte et al., 2017), which is a transition-based parser that works by scanning the string left-to-right and building the graph as the scan proceeds.",
"This transition-based system is akin to the dependency parsing transition-system ArcEager of Nivre (2004), only without constraints that ensure that the resulting structure is a tree.",
"In addition, there are operations that make the system create additional non-projective structures by checking after transition step whether siblings should be connected together with an edge.",
"The complexity of AMREager is linear in the length of the sentence.",
"AMREager was extended to other languages (Damonte and Cohen, 2018), and we leave it for future work to test the utility of AMR for paraphrase detection in these languages.",
"Let S be a set of sentences.",
"We are given input data in the form of ( x ( i ) 1 , x ( i ) 2 , b ( i ) ) for i [ n ] where n is the number of training examples, x ( i ) j S , j { 1 , 2 } and b ( i ) { 0 , 1 } is a binary indicator that tells whether x ( i ) 1 is a paraphrase of x ( i ) 2 .",
"The goal is to learn a classifier c : S S { 0 , 1 } that tells for unseen instances whether the pair of sentences given as input are paraphrases of each other.",
"We denote by [ n ] the set { 1 , . . . , n } .",
"The first step in our approach is the construction of lower-dimensional representations for the sentences in the training data.",
"We use latent semantic analysis to get the sentence representations, which are then used to detect paraphrases using a classifier.",
"More specifically, given a set of sentences S = { x ( i ) j | j { 1 , 2 } , i [ n ] } , we build a sentence-term matrix T such that T k indicates the use of the th word in the k th sentence in S .",
"The number of rows is the number of sentences in the dataset and the number of columns is the vocabulary size.",
"This follows previous work with the use of LSA for paraphrasing (Guo and Diab, 2012; Ji and Eisenstein, 2013).",
"T k is the count of the th word in the k th sentence: T k = count( , k )",
"T k is the term frequency-inverse document frequency (TF-IDF) for the k th sentence with respect to the th word.",
"TF-IDF is commonly used in Information Retrieval to score words in a document and combines the frequency of the words in a document with the rarity of the term across documents.",
"With TF-IDF, in order to have a high score, concepts must appear in this sentence and not in many others.",
"In that case, we define: T k = count( , k ) n csent( , k ) where count( , k ) gives the count of the th word in the k th sentence and csent is the 444 number of sentences which contain the th word: csent( , k ) = |{ k [ | S | ]: count( k, ) > 0 }| .",
"The AMR-based systems of Section 5 build upon this by re-weighting T k with terms depending on the AMRs of the sentences.",
"For paraphrasing, previous work (Ji and Eisenstein, 2013) has also considered the transductive setting (Gammerman et al., 1998), which we also use in our experiments.",
"In the transductive setting, S also includes the sentences on which we expect to perform the final evaluation for the purpose of learning the latent representations.",
"Note that, in this case, the labels b ( i ) are not used in the process of constructing word representations.",
"In the inductive setting, on the other hand, the sentences in the testing set are not included in training and we project them instead using the LSA projection matrices onto the latent space learned to find their representations.",
"where U R k m , V R m and R m m is a diagonal matrix of singular values.",
"The final sentence representations are the rows of the U matrix which range over the sentences and have m dimensions.",
"The output of this process is a function f : S R m which attaches to each sentence a representation.",
"The idea behind LSA is that this matrix decomposition will make semantically similar sentences to appear close in the latent space, hence alleviating the problem of data sparsity and making it easier to detect when two sentences are paraphrases of each other.",
"Once we construct the sentence representations from the training data (either in the inductive or the transductive setting) we use the function f to map each pair of sentences from the training data ( x ( i ) 1 , x ( i ) 2 ) to two vectors f ( x ( i ) 1 ) + f ( x ( i ) 2 ) and | f ( x ( i ) 1 ) f ( x ( i ) 2 ) | (where the absolute value is taken coordinate-wise) and then concatenate them into a feature vector ( x ( i ) 1 , x ( i ) 2 ) , which is then used as input to a support vector machine (SVM) classifier (Ji and Eisenstein, 2013).",
"1 5 Abstract Meaning Representation Features The main hypothesis tested in this work is that AMR can be useful in deciding whether two sentences are paraphrases of each other.",
"We investigate two ways to use AMR information to better inform the classifier: similarity-based and LSA-based.",
"An obvious way to use AMR information is to just compute the similarity between the two graphs and use the score as an additional feature.",
"As a score we use Smatch, which computes the overlap in terms of recall, precision and F-score between two unaligned graphs by finding the alignments between the graphs that maximizes the overlap.",
"The alignment step is necessary because in AMR multiple nodes can have the same labels and arbitrary variable names are used to distinguish between them.",
"Smatch is the standard metric to evaluate the overlap between AMR graphs.",
"The score returned by Smatch is used as a single additional feature for the SVM.",
"The amount of overlap in the AMR nodes of the two graphs can be a good indicator of whether the sentences are paraphrases of each other.",
"To test this hypothesis, we extract the unordered sets of AMR nodes and use the Jaccard similarity co-efficient as a feature.",
"This is directly related to the concept identification step of the AMR parsing process, which is concerned with generating and labeling the nodes of the AMR graph.",
"Concept identification is arguably one of the most challenging part of AMR parsing as the mapping between word spans and AMR nodes is not trivial (Wer-ling et al., 2015).",
"It is often considered as the first stage in the AMR parsing pipeline and it is therefore reasonable to attempt using its intermediate results.",
"We choose Jaccard as a metric for bag of concepts overlap following previous work in paraphrase detection (Achananuparp et al., 2008; Be-1 We note that while the NLP community has largely switched to the use of neural networks for classification problems, in our case support vector machines prove to be a simpler and more efficient solution. They also tend to generalize better than neural networks, as the number of features we use is not large. 445 rant and Liang, 2014).",
"We note that while this approach of using AMR to detect paraphrase may sound plausible, it does not perform very well.",
"As such, we compare and contrast this as an AMR baseline with the approach that makes use of PageRank with TF-IDF reweighting for LSA, as described next.",
"LSA The main idea is to re-weight the LSA sentence-term matrix T (Section 4) according to a probability distribution over the AMR nodes, which we accomplish by means of PageRank (Page et al., 1999).",
"The utility of re-weighting terms in the sentence-term matrix has been previously proved (Turney and Pantel, 2010).",
"PageRank is a method, originally developed for web pages, for ranking nodes in a graph according to their impact on other nodes.",
"The algorithm works iteratively by adjusting at each iteration the score of each node based on the number and scores of nearby nodes that is connected to it, until convergence.",
"Prior to applying PageRank, we merge the two graphs by collapsing the concepts in the two graphs that have the same labels, similarly to Liu et al. (2015), as shown in Figure",
"2. We then compute the PageRank score for each node in the merged graph and multiply them by the corresponding frequency count of that concept in the sentence-term matrix.",
"The graph merging step is necessary in order to ensure that overlapping concepts obtain high PageRank scores.",
"The PageRank step applied to the merged graph ensures that this importance propagates to nearby nodes.",
"For a given graph G = ( V, E ) , PageRank takes as input a list of edges between nodes: E = { ( n i , m i ) } , i = 0 , . . . , n n = | E | and outputs a PageRank score for each node by solving the following equations with respect to PG( ) : PG( n ) = X m I ( n ) PG( m ) l ( m ) where I ( n ) are the input edges to node n and l ( m ) is the number of edges coming out of m .",
"For each concept of the merged AMR graph, we compute T k , the weight for the LSA matrix introduced in Section 4, as follows: T k = PG( l, k ) count( l, k ) where PG( l, k ) is the PageRank of th concept for the k th sentence.",
"As a baseline for the PageRank system, the TF-IDF re-weighting scheme, as described in Section 4, is also used to re-weight the AMR concepts.",
"We now describe the experiments that we devised to discover whether AMR is useful for paraphrase detection.",
"For AMR parsing, we used the JAMR 2 version published for SemEval 2016 (Flanigan et al., 2016), reporting 0.67 Smatch score on LDC2015E86 and the first and only version available for AMREager, 3 obtaining 0.64 Smatch score on the same dataset.",
"First, we discuss experiments where the AMRs are used as a mean to extract additional sparse features for a SVM classifier.",
"Then we turn to LSA to construct a representation of the sentence based on the reweighting on the AMR nodes achieved through either PageRank or TF-IDF.",
"Results show how the latter, which builds on state-of-the-art systems for this task, is a much more promising approach.",
"Finally, we analyze how performance changes as a function of the number of dimensions used in the truncated matrix.",
"For evaluation, we use the Microsoft Research Paraphrase Corpus (Dolan et al., 2004).",
"We use 70% of the dataset as training data and 30% as a test set.",
"The total number of sentence pairs in the corpus is 5,801.",
"The Bag of words (BOW) baseline consists of a SVM that takes into account one single feature: the Jaccard score between the BOW representations for the two sentences, i.e., one-hot vectors indicating whether each word in the vocabulary is used or not.",
"The use of the single Jaccard feature means that for the linear kernel we just learn a threshold on the score.",
"We note that the addition of the similarity-based features does not suffice to outperform the BOW baseline, as described in Table",
"1. Unlike Smatch, the bag of concepts feature does not need to find a, possibly wrong, alignment between the two graphs 2 JAMR is available from https://github.com/ jflanigan/jamr .",
"3 AMREager is available from http://cohort.inf.",
"ed.ac.uk/amreager.html .",
"because it considers the node labels only.",
"Interestingly, the addition of the bag of concepts feature is beneficial only for AMREager.",
"It is indeed worth noting the different behaviors of the two parsers: when using the Smatch score only, JAMR reports slightly higher numbers than AMREager.",
"However, when using the bag of concepts features too, AMREager is considerably better than JAMR, which is unexpected as the concept identification performance of the two parsers is reported to be identical (Damonte et al., 2017).",
"There is also some variability with the kernel used for the SVM classifier.",
"The polynomial kernel does consistently better than the RBF and linear kernel.",
"This means that a low-level interaction between the sentence representations does exist (when trying to determine whether they are para-phrases), but a higher order interaction, such as implied with RBF, is not necessary to be modeled.",
"We now turn to experiments involving LSA as a mean to represent the candidate paraphrases.",
"In this set of experiments, the baseline consists of using TF-IDF to weight the bag of words in the sentence-term matrix.",
"We first try to replace the bag of words with the bag of concepts from the AMR graphs, also re-weighted by TF-IDF.",
"Then, we also replace the TF-IDF with PageRank as it is more appropriate to re-weight graph structures than TF-IDF.",
"We report experiments for both inductive setting and transductive setting (Table 3).",
"Our first finding is that, regardless of the parser, AMR is very helpful in the tranductive setting while it is harmful in the inductive setting.",
"When using bag of words, it is easy to project sentences of the test set into the latent space learned on the training set only.",
"However, our experiments indicate that this is not as easy with the AMR concepts produced by the two parsers.",
"On the other hand, when the latent space is learned using also the sentences in the test set, the abstractive power of AMRs is helpful for this task.",
"In the inductive setting, PageRank fails to improve over the TF-IDF scheme and neither of them outperform the BOW baseline.",
"AMREager outperforms JAMR in this case.",
"In the transduc-447 kernel acc.",
"tive case, the AMRs provided by JAMR are helpful with both TF-IDF and PageRank, while the graphs provided by AMREager give good results only for the PageRank scheme.",
"The best result is achieved with JAMR, PageRank and a linear kernel for the SVM classifier.",
"We wanted to test in our experiments whether the same gains that are achieved with AMR parsing can also be achieved with just a syntactic parser.",
"To test that, we parsed the paraphrase dataset with a dependency parser and reduced the syntactic parse trees to AMR graphs (meaning, we represented the dependency trees as graphs by representing each word as a node and labeled dependency relations as edges).",
"Figure 3 gives an example of such conversion.",
"As can be see, the AMR-like representation for the dependency trees retains words such as determiners (the).",
"It also uses a different set of relations, as reflected by the edge labels that the dependency parser returns.",
"We chose to do this reduction instead of directly building a classifier that makes use of the dependency trees to ensure we are conducting a controlled experiment in which we precisely compare the use of syntax for paraphrase against the use of semantics.",
"Once the syntactic trees are converted",
"to AMR graphs, the same code is used to run the experiments as in the case of AMR parsing, with both the PageRank and TF-IDF reweighting settings.",
"We used the dependency parser from the Stanford CoreNLP (Manning et al., 2014).",
"The results are given in Table 3, under dep.",
"As can be seen, these results lag behind the bag-of-words model in the inductive case and the AMR models in the transductive case.",
"This could be attributed to AMR parsers better abstracting away from the surface form than dependency parsers.",
"Figure 4 shows how performance changes as function of the number of dimensions used in the",
"truncated matrix U (Section 4).",
"More specifically, on the x axis of the plots we have m/l , where m is the number of columns in the truncated matrix and l the number of words in the vocabulary.",
"The plot shows that the performance stays stable for inductive inference.",
"With transductive inference, however, performance peaks when m is very close to the vocabulary size.",
"This shows that, in order to achieve good results, it is not necessary to remove a large number of columns from the original sentence-term matrix.",
"The plot gives us more evidence on how the inductive setting is not ideal for the AMR-based approach.",
"For the TF-IDF reweighting, the systems that show a considerably different behavior are JAMR with linear and RBF kernels, where we show clear peaks for the transductive case.",
"For PageRank also the AMREager systems with linear and RBF kernel follow this trend.",
"In general the polynomial kernel is the one less affected by this variable.",
"Table 2 shows that our best result for the transductive case, which we obtain with JAMR and PageRank, outperforms the current state of the art for paraphrase detection in the transductive setting.",
"This is not true for the inductive case, proving the preference of the AMR-based LSA approach for the former setting.",
"We described an approach to incorporate an AMR parser output into the detection of paraphrases.",
"Our method works by merging two graphs that need to be tested for a paraphrase relation, and then re-weighting a sentence-term matrix by the PageRank values of the nodes in the merged graph.",
"We find that our method gives significant improvements over state of the art in paraphrase detection in the transductive setting, showing that AMR is indeed helpful for this task.",
"We further show that the inductive settings is instead not ideal for this type of approach.",
"We are encouraged by the results, and believe that paraphrase detection can also be used as a proxy test for the performance of an AMR parser: if an AMR parser is close to canonicalizing language, it should be of significant help in detecting 449 0.2 0.4 0.6 0.8 0 .",
"paraphrase relations.",
"In our experiments, the overall best result was achieved by JAMR.",
"More generally, our results show that JAMR has been more helpful in the transductive setting and in the first set of experiment when using the Smatch score only, while AMREager wins the comparison in the inductive case as well as in the first set of experiments when using both the Smatch score and the bag of concepts score as additional features.",
"The authors would like to thank the three anonymous reviewers for their helpful comments.",
"This research was supported by a grant from Bloomberg, a grant from Huawei Technologies and by the EU H2020 project SUMMA, under grant agreement 688139."
] | [
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"objective",
"other",
"other"
] |
[
"Existing work on probing of pretrained language models (LMs) has predominantly focused on sentence-level syntactic tasks.",
"In this paper, we introduce document-level discourse probing to evaluate the ability of pretrained LMs to capture document-level relations.",
"We experiment with 7 pretrained LMs, 4 languages, and 7 discourse probing tasks, and find BART to be overall the best model at capturing discourse but only in its encoder, with BERT performing surprisingly well as the baseline model.",
"Across the different models, there are substantial differences in which layers best capture discourse information, and large disparities between models.",
"The remarkable development of pretrained language models (Devlin et al., 2019; Lewis et al., 2020; Lan et al., 2020) has raised questions about what precise aspects of language these models do and do not capture.",
"Probing tasks offer a means to perform fine-grained analysis of the capabilities of such models, but most existing work has focused on sentence-level analysis such as syntax (Hewitt and Manning, 2019; Jawahar et al., 2019; de Vries et al., 2020), entities/relations (Papanikolaou et al., 2019), and ontological knowledge (Michael et al., 2020).",
"Less is known about how well such models capture broader discourse in documents.",
"Rhetorical Structure Theory is a framework for capturing how sentences are connected and describing the overall structure of a document (Mann and Thompson, 1986).",
"A number of studies have used pretrained models to classify discourse markers (Sileo et al., 2019) and discourse relations (Nie et al., 2019; Shi and Demberg, 2019), but few (Koto et al., to appear) have systematically investigated the ability of pretrained models to model discourse structure.",
"Furthermore, existing work relating to discourse probing has typically focused exclusively Model Type #Param #Data Objective BERT Enc 110M 16GB MLM+NSP RoBERTa 110M 160GB MLM ALBERT 12M 16GB MLM+SOP ELECTRA 110M 16GB MLM+DISC GPT-2 Dec 117M 40GB LM BART Enc+Dec 121M 160GB DAE T5 110M 750GB DAE Table 1: Summary of all English pretrained language models used in this work.",
"on the BERT-base model, leaving open the question of how well these findings generalize to other models with different pretraining objectives, for different languages, and different model sizes.",
"Our research question in this paper is: How much discourse structure do layers of different pretrained language models capture, and do the findings generalize across languages?",
"There are two contemporaneous related studies that have examined discourse modelling in pretrained language models.",
"Upadhye et al. (2020) analyzed how well two pretrained models capture referential biases of different classes of English verbs.",
"Zhu et al. (2020) applied the model of Feng and Hirst (2014) to parse IMDB documents (Maas et al., 2011) into discourse trees.",
"Using this (po-tentially noisy) data, probing tasks were conducted by mapping attention layers into single vectors of document-level rhetorical features.",
"These features, however, are unlikely to capture all the intricacies of inter-sentential abstraction as their input is formed based on discourse relations 1 and aggregate statistics on the distribution of discourse units.",
"To summarize, we introduce 7 discourse-related probing tasks, which we use to analyze 7 pretrained language models over 4 languages: English, Mandarin Chinese, German, and Spanish.",
"Code and public-domain data associated with this research is available at https://github.com/fajri91/discourse_ probing.",
"We outline the 7 pretrained models in Table",
"1. They comprise 4 encoder-only models: BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), and ELECTRA (Clark et al., 2020); 1 decoder-only model: GPT-2 (Rad-ford et al., 2019); and 2 encoderdecoder models: BART (Lewis et al., 2020) and T5 (Raffel et al., 2019).",
"To reduce the confound of model size, we use pretrained models of similar size ( 110m model parameters), with the exception of ALBERT which is designed to be lighter weight.",
"All models have 12 transformer layers in total; for BART and T5, this means their encoder and decoder have 6 layers each.",
"Further details of the models are provided in the Supplementary Material.",
"We experiment with a total of seven probing tasks, as detailed below.",
"Tasks 46 are component tasks of discourse parsing based on rhetorical structure theory (RST; Mann and Thompson (1986)).",
"In an RST discourse tree, EDUs are typically clauses or sentences, and are hierarchically connected with discourse labels denoting: (1) nuclearity = nucleus (N) vs. satellite (S); 2 and (2) discourse relations (e.g. elaborate ).",
"An example of a binarized RST discourse tree is given in Figure",
"1.",
"1. Next sentence prediction.",
"Similar to the next sentence prediction (NSP) objective in BERT pretraining, but here we frame it as a 4-way classification task, with one positive and 3 negative candidates for the next sentence.",
"The preceding context takes the form of between 2 and 8 sentences, but the candidates are always single sentences.",
"2. Sentence ordering.",
"We shuffle 37 sentences and attempt to reproduce the original order.",
"This task is based on Barzilay and Lapata (2008) and Koto et al. (2020), and is assessed based on rank correlation relative to the original order.",
"2 The satellite is a supporting EDU for the nucleus.",
"or , or although (Nie et al., 2019), representing the conceptual relation between the sen-tences/clauses.",
"4. RST nuclearity prediction.",
"For a given ordered pairing of (potentially complex) EDUs which are connected by an unspecified relation, predict the nucleus/satellite status of each (see Figure 1).",
"5. RST relation prediction.",
"For a given ordered pairing of (potentially complex) EDUs which are connected by an unspecified relation, predict the relation that holds between them (see Figure 1).",
"6. RST elementary discourse unit (EDU) segmentation.",
"Chunk a concatenated sequence of EDUs into its component EDUs.",
"7. Cloze story test.",
"Given a 4-sentence story context, pick the best ending from two possible options (Mostafazadeh et al., 2016; Sharma et al., 2018).",
"This task is harder than NSP, as it requires an understanding of commonsense and storytelling (Chaturvedi et al., 2017; Liu et al., 2018).",
"We summarize all data (sources, number of labels, and data split) in Table",
"2. This includes English, Chinese, German, and Spanish for each probing task.",
"For NSP and sentence ordering, we generate data from news articles and Wikipedia.",
"For the RST tasks, we use discourse treebanks for each of the four languages.",
"We formulate all probing tasks except sentence ordering and EDU segmentation as a classification problem, and evaluate using accuracy.",
"During fine-tuning, we add an MLP layer on top of the pretrained model for classification, and only update the MLP parameters (all other layers are frozen).",
"We use the [CLS] embedding for BERT and ALBERT following standard practice, while for other models we perform average pooling to obtain a vector for each sentence, and concatenate them as the input to the MLP.",
"3 3 BERT and ALBERT performance with average pooling For sentence ordering, we follow Koto et al. (2020) and frame it as a sentence-level sequence labelling task, where the goal is to estimate P ( r | s ) , where r is the rank position and s the sentence.",
"The task has 7 classes, as we have 37 sentences (see Section 3).",
"At test time, we choose the label sequence that maximizes the sequence probability.",
"Sentence embeddings are obtained by average pooling.",
"The EDU segmentation task is also framed as a binary sequence labelling task (segment boundary or not) at the (sub)word level.",
"We use Spearman rank correlation and macro-averaged F1 score to evaluate sentence ordering and EDU segmentation, respectively.",
"We use a learning rate 1 e 3 , warm-up of 10% of total steps, and the development set for early stopping in all experiments.",
"All presented results are averaged over three runs.",
"4 5 Results and Analysis In Figure 2, we present the probing task performance on English for all models based on a representation generated from each of the 12 layers of the model.",
"First, we observe that most performance fluctuates (non-monotonic) across layers except for some models in the NSP task and some ALBERT results in the other probing tasks.",
"We also found that most models except ALBERT tend to have a very low standard deviation based on three runs with different random seeds.",
"We discover that all models except T5 and early layers of BERT and ALBERT perform well over the NSP task, with accuracy 0.8, implying it is a simple task.",
"However, they all struggle at sentence ordering (topping out at 0 . 4 ), suggesting that they are ineffective at modelling discourse over multiple sentences; this is borne out in Figure 4, where performance degrades as the number of sentences to re-order increases.",
"Interestingly, for Discourse Connectives, RST Nuclearity, and RST Relation Prediction, the models produce similar patterns, even though the discourse connective data is derived from a different dataset and theoretically divorced from RST.",
"BART outperforms most other models in layers 16 for these tasks (a similar observation is found for NSP and Sentence Ordering) with BERT and ALBERT struggling particularly in the earlier layers.",
"For is in included in the Appendix.",
"EDU segmentation, RoBERTa and again the first few layers of BART perform best.",
"For the Cloze Story Test, all models seem to improve as we go deeper into the layers, suggesting that high-level story understanding is captured deeper in the models.",
"We summarize the overall performance by calculating the averaged normalized scores in the last plot in Figure",
"2. 5 RoBERTa and BART appear to be the best overall models at capturing discourse information, but only in the encoder layers (the first 6 layers) for BART.",
"We hypothesize that the BART decoder focuses on sequence generation, and as such is less adept at language understanding.",
"This is supported by a similar trend for T5, also a denoising autoencoder.",
"BERT does surprisingly well (given that it's the baseline model), but mostly in the deeper layers (710), while ELECTRA performs best at the three last layers.",
"In terms of the influence of training data, we see mixed results.",
"BART and RoBERTa are the two best models, and both are trained with more data than most models (an order of magnitude more; see Table 1).",
"But T5 (and to a certain extent GPT-2) are also trained with more data (in fact T5 has the most training data), but their discourse modelling performance is underwhelming.",
"In terms of training objectives, it appears that a pure decoder with an LM objective (GPT-2) is less effective at capturing discourse structure.",
"ALBERT, the smallest model (an order of magnitude less parameters than most), performs surprisingly well (with high standard de-viation), but only at its last layer, suggesting that discourse knowledge is concentrated deep inside the model.",
"Lastly, we explore whether these trends hold if we use a larger model (BERT-base vs. BERT-large) and for different languages (again based on monolingual BERT models for the respective languages).",
"Results are presented in Figure",
"3. For model size (English (large) vs. English), the overall pattern is remarkably similar, with a slight uplift in absolute results with the larger model.",
"Between the 4 different languages (English, Chinese, German, and Spanish), performance varies for all tasks except for NSP (e.g. EDU segmentation appears to be easiest in Chinese, and relation prediction is the hardest in German), but the shape of the lines is largely the same, indicating the optimal layers for 5 Given a task, we perform minmax normalization for all model-layer scores (7 12 scores in total), and then compute the average over all tasks for each model's layer.",
"a particular task are consistent across languages.",
"We perform probing on 7 pretrained language models across 4 languages to investigate what discourse effects they capture.",
"We find that BART's encoder and RoBERTa perform best, while pure language models (GPT-2) struggle.",
"Interestingly, we see a consistent pattern across different languages and model sizes, suggesting that the trends we found are robust across these dimensions.",
"We are grateful to the anonymous reviewers for their helpful feedback and suggestions.",
"The first author is supported by the Australia Awards Scholarship (AAS), funded by the Department of Foreign Affairs and Trade (DFAT), Australia.",
"This research was undertaken using the LIEF HPC-GPGPU Facility hosted at The University of Melbourne.",
"This facility was established with the assistance of LIEF Grant LE170100200."
] | [
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other",
"other"
] |
[
"Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift.",
"Recent work has tied these shortcomings to beam search the de facto standard inference algorithm in NMT and Eikema and Aziz (2020) propose to use Minimum Bayes Risk (MBR) decoding on unbiased samples instead.",
"In this paper, we empirically investigate the properties of MBR decoding on a number of previously reported biases and failure cases of beam search.",
"We find that MBR still exhibits a length and token frequency bias, owing to the MT metrics used as utility functions, but that MBR also increases robustness against copy noise in the training data and domain shift.",
"1 1 Introduction Neural Machine Translation (NMT) currently suffers from a number of issues such as underestimating the true length of translations (Koehn and Knowles, 2017; Stahlberg and Byrne, 2019; Kumar and Sarawagi, 2019), underestimating the probability of rare words and over-generating very frequent words (Ott et al., 2018), or being susceptible to copy noise in the training data (Khayrallah and Koehn, 2018).",
"In out-of-domain translation, hallucinations (translations that are fluent but unrelated to the source) are common (Koehn and Knowles, 2017; Lee et al., 2018; Muller et al., 2020).",
"Previous work has addressed these problems with decoding heuristics such as length normalization (Wu et al., 2016), data cleaning (Junczys-Dowmunt, 2018; Banon et al., 2020) or model regularization (Bengio et al., 2015; Shen et al., 2016; 1 Code and documentation available at https:// github.com/ZurichNLP/understanding-mbr Wiseman and Rush, 2016; Zhang et al., 2019; Ng et al., 2020).",
"Recently, Eikema and Aziz (2020) have highlighted the role of the decision rule, namely searching for the highest-scoring translation, and have argued that it is at least partially to blame for some of these biases and shortcomings.",
"They found that sampling from an NMT model is faithful to the training data statistics, while beam search is not.",
"They recommend the field look into alternative inference algorithms based on unbiased samples, such as Minimum Bayes Risk (MBR) decoding.",
"We believe MBR has potential to overcome several known biases of NMT.",
"More precisely, if a bias can be understood as being caused by the mode-seeking nature of beam search then we hypothesize that MBR could exhibit less bias.",
"We view short translations, copies of the source text and hallucinations as hypotheses that are probable, but quite different to other probable hypotheses.",
"If such pathological hypotheses are in a pool of samples, it is unlikely that MBR would select them as the final translation.",
"While Eikema and Aziz (2020) compare the statistical properties of samples and beam search outputs, and show that MBR can perform favourably compared to beam search according to automatic metrics, our paper aims to perform a targeted study of MBR and its properties, specifically its effects on the biases and shortcomings discussed previously.",
"In our experiments we find that If used with a utility function that favours short translations, MBR inherits this bias; MBR still exhibits a token probability bias in that it underestimates the probability of rare tokens and overestimates very common tokens; Compared to beam search, MBR decoding is more robust to copy noise in the training data; MBR exhibits higher domain robustness than beam search.",
"We demonstrate that MBR reduces the amount of hallucinated content in translations.",
"The de facto standard decoding algorithm in NMT is beam search (Graves, 2012; Boulanger-Lewandowski et al., 2013; Sutskever et al., 2014).",
"Beam search belongs to a broader class of inference procedures called maximum-a-posteriori (MAP) algorithms.",
"What MAP algorithms have in common is that they attempt to find the most probable translation under a given model.",
"Essentially, they try to recover the mode of the output distribution over sequences.",
"An exact solution to this search problem is usually intractable.",
"Beam search is an approximation that is tractable, but it also frequently fails to find the true mode of the distribution (Stahlberg and Byrne, 2019).",
"NMT systems are known to be deficient in a number of ways.",
"We describe here only the ones relevant to our discussion and experiments.",
"Length bias: Systems underestimate the true length of translations.",
"On average, their translations are shorter than references (Koehn and Knowles, 2017; Stahlberg and Byrne, 2019; Kumar and Sarawagi, 2019).",
"Skewed word frequencies: In translations, tokens that occur frequently in the training data are overrepresented.",
"On the other hand, rare tokens occur fewer times than their probability in the training data would suggest (Ott et al., 2018).",
"Beam search curse: Increasing the beam size leads to finding translations that are more probable under the model.",
"In theory, this should improve translation quality.",
"Paradoxically, empirical results show that large beam sizes decrease quality (Koehn and Knowles, 2017; Ott et al., 2018).",
"Susceptibility to copy noise: Copied content in the training data disproportionately affects translation quality.",
"More specifically, the most detrimental kind are copies of the source sentence on the target side of the training data (Khayrallah and Koehn, 2018).",
"If such copies are present in the training data, copy hypotheses will be overrepresented in beam search (Ott et al., 2018).",
"Low domain robustness: Systems are not robust under distribution shifts such as domain shift.",
"Having a system translate in an unknown test domain often does not gradually degrade translation quality, but leads to complete failure cases called hallucinations (Lee et al., 2018; Koehn and Knowles, 2017; Muller et al., 2020).",
"Much past research has attributed those deficiencies to model architectures or training algorithms, while treating beam search as a fixed constant in experiments.",
"In contrast, Eikema and Aziz (2020) argue that the fit of the model is reasonable, which means that neither the model itself nor its training can be at fault.",
"Rather, they argue that the underlying problem is beam search.",
"Inadequacy of the mode: Stahlberg and Byrne (2019) and Eikema and Aziz (2020) suggest that the mode of the distribution over output sequences is in fact not the best translation.",
"On the contrary, it seems that in many cases the mode is the empty sequence (Stahlberg and Byrne, 2019).",
"In addition, it appears that the probability of the mode is not much different from very many other sequences, as the output distribution is quite flat in an extensive region of output space (Eikema and Aziz, 2020).",
"Intuitively, it makes sense that such a situation could arise in NMT training: maximum likelihood estimation training does not constrain a model to be characterized well by its mode only.",
"If the mode is inadequate, then obviously that is problematic for a mode-seeking procedure such as beam search, and MAP inference in general.",
"In fact, MAP decoding should be used only if the mode of the output distribution can be trusted (Smith, 2011).",
"An alternative is a decision rule that considers how different a translation is from other likely translations.",
"MBR decoding was used in speech recognition (Goel and Byrne, 2000) and statistical machine translation (Kumar and Byrne, 2004; Tromble et al., 2008).",
"More recently, MBR was also used to improve beam search decoding in NMT (Stahlberg et al., 2017; Shu and Nakayama, 2017; Blain et al., 2017).",
"Eikema and Aziz (2020) are the first to test a variant of MBR that operates on samples instead of an nbest list generated by beam search.",
"We give here a simplified, accessible definition of MBR in the context of NMT.",
"Essentially, the goal of MBR is to find not the most probable translation, but the one that minimizes the expected risk for a given loss function and the true posterior distribution.",
"In practice, the set of all possible candidate translations can be approximated by drawing from the model a pool of samples S of size n : S = ( s 1 , ..., s n ) p ( y | x, ) .",
"The same set of samples can also be used to approximate the true posterior distribution.",
"Then for each sample s i in S , its expected utility (the inverse risk) is computed by comparing it to all other samples in the pool.",
"The sample with the highest expected utility is selected as the final translation: y (cid:63) = argmax s i S 1 n n (cid:88) s j =1 u ( s i , s j ) (2) The size of the pool n and the utility function u are hyperparameters of the algorithm.",
"A particular utility function typically computes the similarity between a hypothesis and a reference translation.",
"Therefore, MBR can be thought of as selecting a consensus translation [...] that is closest on average to all likely translations (Kumar and Byrne, 2004).",
"We hypothesize that MBR decoding is useful for a certain class of failure cases encountered with beam search.",
"Namely, if an incorrect translation from beam search can be characterized as a hypothesis that is likely but fairly different from other hypotheses with similar probability, then MBR is expected to improve over beam search.",
"Several known deficiencies of NMT systems outlined in Section 2.2 belong to this class of beam search failures.",
"For instance, length bias occurs when a beam search translation is shorter than other hypotheses with comparable probability.",
"Likewise, translations that are copies of the input sentence or hallucinations (translations that are fluent, but unrelated to the input) can be avoided with MBR if they are not common in a pool of samples.",
"Finally, we study the skewedness of token frequencies in translations.",
"Eikema and Aziz (2020) study lexical biases in NMT models, showing that model samples have higher agreement with the training distribution than MAP output.",
"We investigate whether this is also true for MBR decoding, focusing on the well-known bias towards frequent tokens.",
"We use data for a number of language pairs from the Tatoeba Challenge (Tiedemann, 2020).",
"Individual language pairs are fairly different in terms of language families, scripts and training set sizes.",
"See Appendix A for details about our data sets.",
"For one additional experiment on out-of-domain robustness we use data from Muller et al. (2020).",
"This data set is German-English and defines 5 different domains of text (medical, it, koran, law and subtitles).",
"Following Muller et al. (2020) we train our model on the medical domain, and use data in other domains to test domain robustness.",
"We hold out a random sample of the training data for testing purposes.",
"The size of this sample varies between 1k and 5k sentences, depending on the overall size of the training data.",
"Our preprocessing and model settings are inspired by OPUS-MT (Tiedemann and Thottingal, 2020).",
"We use Sentencepiece (Kudo, 2018) with subword regularization as the only preprocessing step, which takes care of both tokenization and subword segmentation.",
"The desired number of pieces in the vocabulary varies with the size of the data set.",
"We train NMT models with Sockeye 2 (Domhan et al., 2020).",
"The models are standard Transformer models (Vaswani et al., 2017), except that some settings (such as word batch size and dropout rate) vary with the size of the training set.",
"Following Eikema and Aziz (2020) we disable label smoothing so as to get unbiased samples.",
"In all experiments, we compare beam search to MBR decoding and in most cases also to single samples.",
"For beam search, we always use a beam size of",
"5. Single samples are drawn at least 100 times to show the resulting variance.",
"If not stated otherwise, all results presented are on a test set held out from the training data, i.e. are certainly in-domain, which avoids any unintended out-of-domain effects.",
"We evaluate automatic translation quality with BLEU (Papineni et al., 2002), CHRF (Popovic, 2016) and METEOR (Denkowski and Lavie, 2014).",
"We compute BLEU and CHRF with SacreBLEU (Post, 2018).",
"See Appendix B for details.",
"MBR also depends on samples, so we repeat each MBR experiment twice to show the resulting variance.",
"We also vary the number of samples used with MBR, from 5 to 100 in increments of",
"5. Finally, we produce MBR translations with different utility functions.",
"All of the utility functions are sentence-level variants of our evaluation metrics: BLEU, CHRF or METEOR.",
"See Table 1 for an overview of utility functions.",
"If not stated otherwise, MBR results are based on 100 samples and use chrf-1 as the utility function.",
"We evaluate MBR decoding with different utility functions.",
"There is no single utility function which performs best on all evaluation metrics.",
"Instead, any of our evaluation metrics can be optimized by choosing a closely related utility function (see Figure 2 and Appendix D).",
"For instance, chrf-2 as the utility function leads to the best CHRF2 evaluation scores.",
"Number of samples: We find that the translation quality of MBR increases steadily as the number of samples grows (see Figure 2).",
"This means that MBR does not suffer from the beam search curse where single pathological hypotheses in a large beam can jeopardize translation quality.",
"We analyze the lengths of translations produced by different decoding methods in Table 2 (see Appendix E for additional statistics).",
"We find that in terms of mean length of translations, beam search underestimates the true length of translations, even when hypotheses are normalized.",
"Hypotheses generated by sampling better match the reference length.",
"This is in line with the findings of Eikema and Aziz (2020).",
"For MBR decoding, it is clear that the choice of utility function has an impact on the mean length of the resulting translations.",
"For instance, employing sentence-level BLEU as the utility function leads to translations that are too short.",
"BLEU is a precision-based metric known to prefer shorter translations on the sentence level (Nakov et al., 2012).",
"chrf-2 and meteor emphasize recall more, and the resulting MBR translations overestimate the true length of translations.",
"2 On the other hand, chrf-0.5 , a CHRF variant with a bias for precision, leads to the shortest translations overall.",
"We test whether we can reduce length biases by symmetrizing our utility functions u as follows: u sym ( s i , s j ) = H ( u ( s i , s j ) , u ( s j , s i )) (3) where H is the harmonic mean.",
"This should avoid favouring either recall or precision, but in practice even symmetric utility functions lead to translations that are shorter than references on average.",
"Based on these observations we conclude that MBR inherits length biases associated with its utility function .",
"2 While Popovi c (2016) find that the recall-biased CHRF2 achieves the highest correlation with human judgments as an evaluation metric, this does not entail that the same recall bias is optimal in the utility function for MBR.",
"Beam search overgenerates tokens that are very common in the training data and undergenerates rare tokens (see Section 2.2).",
"Sampling on the other hand assigns correct probabilities to common and rare tokens.",
"Given that MBR is based on samples, does it share this property with sampling?",
"In Figure 3 we show that this is not the case.",
"Although the skewedness of probabilities is less severe for MBR than for beam search, MBR still assigns too high a probability to frequent events.",
"A reason for this is that our utility functions are based on surface similarity between samples, so rare tokens, which will be sampled rarely, will thus also have low utility.",
"Unfortunately, there is a trade-off between correct probability statistics for very common and very rare words and translation quality .",
"The most faithful statistics can be obtained from sampling, but sampling leads to the worst overall translation quality.",
"In general, as the number of samples grows, MBR approaches but does not outperform beam search on our in-domain data (see Figure 1).",
"On our out-of-domain data, the gap between MBR and beam search is smaller.",
"We hypothesize that MBR may be useful for out-of-domain translation.",
"We evaluate MBR on a domain robustness benchmark by Muller et al. (2020).",
"Figure 4 shows that on this benchmark MBR outperforms beam search on 2 out of 4 unknown test domains.",
"A possible reason why MBR is able to outperform beam search in unknown domains is that it reduces hallucinated translations.",
"To test this hypothesis, we define a hallucination as a translation that has a CHRF2 score of less than 0 .",
"01 when compared to the reference, inspired by Lee et al. (2018).",
"Given this definition of hallucination, Figure 5 shows that on average, MBR assigns a lower utility score to hypotheses that are hallucinations.",
"Similarly, MBR reduces the percentage of hallucinations found in the final translations, compared to beam search or sampling.",
"To summarize, we find that MBR decoding has a higher domain robustness than beam search .",
"If copies of source sentences are present on the target side of training data, copies are overrepresented in beam search (Section 2.2).",
"Here we test whether MBR suffers from this copy bias as well.",
"We create several versions of our training sets where source copy noise is introduced with a proba-Figure 4: CHRF1 scores of MBR and beam search on the domain robustness benchmark of Muller et al. (2020).",
"The medical test set is in-domain, the remaining sets are out-of-domain.",
"bility between 0.1% and 50%.",
"As shown in Figure 6, MBR and beam search are comparable if there are few copies in the training data.",
"However, if between 5 and 25% of all training examples are copies, then MBR outperforms beam search by a large margin ( > 10 BLEU for Arabic-German).",
"As further evidence for the ability of MBR to tolerate copy noise we present an analysis of copies in Figure 7.",
"We define a copy as a translation with a word overlap with the reference of more than 0 .",
"9 .",
"We show that MBR assigns a much lower utility to copy hypotheses than to all hypotheses taken together.",
"In the final translations, MBR manages to reduce copies substantially.",
"For instance, if around 10% of the training examples are copies, beam search produces around 50% copies, while MBR reduces this number to below 10%.",
"We conclude from this experiment that MBR is more robust to copy noise in the training data .",
"We acknowledge that this setting is artificial because copy noise can easily be removed from data sets.",
"Nonetheless, it is a striking example of a known shortcoming of NMT systems usually attributed to the model or training procedure, when in fact beam search is at least partially to blame.",
"MBR decoding has recently regained attention in MT as a decision rule with the potential to overcome some of the biases of MAP decoding in NMT.",
"We empirically study the properties of MBR decoding with common MT metrics as utility functions, and find it still exhibits a length bias and token frequency bias similar to beam search.",
"The length bias is closely tied to the utility function.",
"However, we also observe that MBR decoding successfully mitigates a number of well-known failure modes of NMT, such as spurious copying, or hallucinations under domain shift.",
"The mechanism by which MBR achieves such robustness is that copies or hallucinated hypotheses in a pool of samples are assigned low utility and never selected as the final translation.",
"In our experiments, MBR did not generally outperform beam search according to automatic metrics, but we still deem it a promising alternative to MAP decoding due to its robustness.",
"For future work, we are interested in exploring more sophisticated similarity metrics to be used as utility functions, including trainable metrics such as COMET (Rei et al., 2020), and investigating how these utility functions affect the overall quality and biases of translations.",
"We will not only release the source code used to train our models (as is common in NLP papers at the moment), but a complete pipeline of code that can be run on any instance in a fully automated fashion.",
"This will allow to reproduce our results, including the graphs and tables shown in this paper, in a consistent way with minimal changes.",
"We encourage the community to attempt to reproduce our results and publish the results.",
"This work has received funding from the Swiss National Science Foundation (grant numbers 105212-169888 and 176727 ).",
"Also, we have been assisted by the computing services of the University of Zurich (S3IT).",
"We would like to thank Bryan Eikema for his help with our implementation of MBR.",
"We also thank Jorg Tiedemann, Annette Rios and Tannon Kew for helpful comments and discussion."
] | [
"abstain",
"abstain",
"objective",
"result",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Recent QA with logical reasoning questions requires passage-level relations among the sentences.",
"However, current approaches still focus on sentence-level relations interacting among tokens.",
"In this work, we explore aggregating passage-level clues for solving logical reasoning QA by using discourse-based information.",
"We propose a discourse-aware graph network (DAGN) that reasons relying on the discourse structure of the texts.",
"The model encodes discourse information as a graph with elementary discourse units (EDUs) and discourse relations, and learns the discourse-aware features via a graph network for downstream QA tasks.",
"Experiments are conducted on two logical reasoning QA datasets, ReClor and LogiQA, and our proposed DAGN achieves competitive results.",
"The source code is available at https://github.com/Eleanor-H/DAGN.",
"A variety of QA datasets have promoted the development of reading comprehensions, for instance, SQuAD (Rajpurkar et al., 2016), HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), and so on.",
"Recently, QA datasets with more complicated reasoning types, i.e., logical reasoning, are also introduced, such as ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020).",
"The logical questions are taken from standardized exams such as GMAT and LSAT, and require QA models to read complicated argument passages and identify logical relationships therein.",
"For example, selecting a correct assumption that supports an argument, or finding out a claim that weakens an argument in a passage.",
"Such logical reasoning is beyond the capability of most of the previous QA models which focus on reasoning with entities or numerical keywords.",
"A main challenge for the QA models is to uncover the logical structures under passages, such as identifying claims or hypotheses, or pointing out flaws in arguments.",
"To achieve this, the QA models should first be aware of logical units, which can be sentences or clauses or other meaningful text spans, then identify the logical relationships between the units.",
"However, the logical structures are usually hidden and difficult to be extracted, and most datasets do not provide such logical structure annotations.",
"An intuitive idea for unwrapping such logical information is using discourse relations.",
"For instance, as a conjunction, because indicates a causal relationship, whereas if indicates a hypothetical relationship.",
"However, such discourse-based information is seldom considered in logical reasoning tasks.",
"Modeling logical structures is still lacking in logical reasoning tasks, while current opened methods use contextual pre-trained models (Yu et al., 2020).",
"Besides, previous graph-based methods (Ran et al., 2019; Chen et al., 2020a) that construct entity-based graphs are not suitable for logical reasoning tasks because of different reasoning units.",
"In this paper, we propose a new approach to solve logical reasoning QA tasks by incorporating discourse-based information.",
"First, we construct discourse structures.",
"We use discourse relations from the Penn Discourse TreeBank 2.0 (PDTB 2.0) (Prasad et al., 2008) as delimiters to split texts into elementary discourse units (EDUs).",
"A logic graph is constructed in which EDUs are nodes and discourse relations are edges.",
"Then, we propose a Discourse-Aware Graph Network (DAGN) for learning high-level discourse features to represent passages.The discourse features are incorporated with the contextual token features from pre-trained language models.",
"With the enhanced features, DAGN predicts answers to logical questions.",
"Our experiments show that DAGN surpasses current opened methods on two recent logical rea-Large-scale Pre-trained Model A signaldetailed [while] digital systemunits [.] With this disadvantage.",
"We propose to construct logic graphs from texts by using discourse relations as edges and elementary discourse units as nodes.",
"We obtain discourse features via graph neural networks to facilitate logical reasoning in QA models.",
"We show the effectiveness of using logic graph and feature enhancement by noticeable improvements on two datasets, ReClor and LogiQA.",
"Our intuition is to explicitly use discourse-based information to mimic the human reasoning process for logical reasoning questions.",
"The questions are in multiple choices format, which means given a triplet (context, question, answer options), models answer the question by selecting the correct answer option.",
"Our framework is shown in Figure",
"1. We first construct a discourse-based logic graph from the raw text.",
"Then we conduct reasoning via graph networks to learn and update the discourse-based features, which are incorporated with the contextual token embeddings for downstream answer prediction.",
"Our discourse-based logic graph is constructed via two steps: delimiting text into elementary discourse units (EDUs) and forming the graph using their relations as edges, as illustrated in Figure 1(1).",
"clause-like text spans delimited by discourse relations can be discourse units that reveal the rhetorical structure of texts (Mann and Thompson, 1988; Prasad et al., 2008).",
"We further observe that such discourse units are essential units in logical reasoning, such as being assumptions or opinions.",
"As the example shown in Figure 1, the while in the context indicates a comparison between the attributes of pure analog system and that of digital systems .",
"The because in the option provides evidence error cannot occur in the emission of digital signals to the claim digital systems are the best information systems .",
"We use PDTB 2.0 (Prasad et al., 2008) to help drawing discourse relations.",
"PDTB 2.0 contains discourse relations that are manually annotated on the 1 million Wall Street Journal (WSJ) corpus and are broadly characterized into Explicit and Im-plicit connectives.",
"The former apparently presents in sentences such as discourse adverbial instead or subordinating conjunction because , whereas the latter are inferred by annotators between successive pairs of text spans split by punctuation marks such as . or ;.",
"We simply take all the Explicit connectives as well as common punctuation marks to form our discourse delimiter library (details are given in Appendix A), with which we delimit the texts into EDUs.",
"For each data sample, we segment the context and options, ignoring the question since the question usually does not carry logical content.",
"Discourse Graph Construction We define the discourse-based graphs with EDUs as nodes, the Explicit connectives as well as the punctuation marks as two types of edges.",
"We assume that each connective or punctuation mark connects the EDUs before and after it.",
"For example, the option sentence in Figure 1 is delimited into two EDUs, EDU 7 = digital systems are the best information systems and EDU 8 = error cannot occur in the emission of digital signals by the connective r = because .",
"Then the returned triplets are ( EDU 7 , r, EDU 8 ) and ( EDU 8 , r, EDU 7 ) .",
"For each data sample with the context and multiple answer options, we separately construct graphs corresponding to each option, with EDUs in the same context and every single option.",
"The graph for the single option k is denoted by G k = ( V k , E k ) .",
"We present the Discourse-Aware Graph Network (DAGN) that uses the constructed graph to exploit discourse-based information for answering logical questions.",
"It consists of three main components: an EDU encoding module, a graph reasoning module, and an answer prediction module.",
"The former two are demonstrated in Figure 1(2), whereas the final component is in Figure 1(3).",
"EDU Encoding An EDU span embedding is obtained from its token embeddings.",
"There are two steps.",
"First, similar to previous works (Yu et al., 2020; Liu et al., 2020), we encode such input sequence <s> context </s> question || option </s> into contextual token embeddings with pre-trained language models, where <s> and </s> are the special tokens for RoBERTa (Liu et al., 2019) model, and || denotes concatenation.",
"Second, given the token embedding sequence { t 1 , t 2 , ..., t L } , the n -th EDU embedding is obtained by e n = (cid:80) l S n t l , where S n is the set of token indices belonging to n -th EDU.",
"Graph Reasoning After EDU encoding, DAGN performs reasoning over the discourse graph.",
"Inspired by previous graph-based models (Ran et al., 2019; Chen et al., 2020a), we also learn graph node representations to obtain higher-level features.",
"However, we consider different graph construction and encoding.",
"Specifically, let G k = ( V k , E k ) denote a graph corresponding to the k -th option in answer choices.",
"For each node v i V , the node embedding v i is initialized with the corresponding EDU embedding e i .",
"N i = { j | ( v j , v i ) E k } indicates the neighbors of node v i .",
"W r ji is the adjacency matrix for one of the two edge types, where r E indicates graph edges corresponding to the explicit connectives, and r I indicates graph edges corresponding to punctuation marks.",
"The model first calculates weight i for each node with a linear transformation and a sigmoid function i = ( W ( v i ) + b ) , then conducts message propagation with the weights: v i = 1 |N i | ( (cid:88) j N i j W r ji v j ) , r ji { r E , r I } (1) where v i is the message representation of node v i .",
"j and v j are the weight and the node embedding of v j respectively.",
"After the message propagation, the node representations are updated with the initial node embeddings and the message representations by v (cid:48) i = ReLU ( W u v i + v i + b u ) , (2) where W u and b u are weight and bias respectively.",
"The updated node representations v (cid:48) i will be used to enhance the contextual token embedding via summation in corresponding positions.",
"Thus t (cid:48) l = t l + v (cid:48) n , where l S n and S n is the corresponding token indices set for n -th EDU.",
"Answer Prediction The probabilities of options are obtained by feeding the discourse-enhanced token embeddings into the answer prediction module.",
"The model is end-to-end trained using cross entropy loss.",
"Specifically, the embedding sequence first goes through a layer normalization (Ba et al., 2016), then a bidirectional GRU (Cho et al., 2014).",
"The output embeddings are then added to the input ones as the residual structure (He et al., 2016).",
"We finally obtain the encoded sequence after another layer normalization on the added embeddings.",
"We then merge the high-level discourse features and the low-level token features.",
"Specifically, the variant-length encoded context sequence, question-and-option sequence are pooled via weighted summation wherein the weights are softmax results of Methods Dev Test Test-E Test-H BERT-Large 53.80 49.80 72.00 32.30 XLNet-Large 62.00 56.00 75.70 40.50 RoBERTa-Large 62.60 55.60 75.50 40.00 DAGN 65.20 58.20 76.14 44.11 DAGN (Aug) 65.80 58.30 75.91 44.46 * The results are taken from the ReClor paper.",
"* DAGN ranks the 1st on the public ReClor leaderboard 1 until 17th Nov., 2020 before submitting it to NAACL.",
"Until now, we find that several better results appeared in the leaderboard and they are not opened.",
"a linear transformation of the sequence, resulting in single feature vectors separately.",
"We concatenate them with <s> embedding from the backbone pre-trained model, and feed the new vector into a two-layer perceptron with a GELU activation (Hendrycks and Gimpel, 2016) to get the output features for classification.",
"We evaluate the performance of DAGN on two logical reasoning datasets, ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020), and conduct ablation study on graph construction and graph network.",
"The implementation details are shown in Appendix B. 3.1 Datasets ReClor contains 6,138 questions modified from standardized tests such as GMAT and LSAT, which are split into train / dev / test sets with 4,638 / 500 / 1,000 samples respectively.",
"The training set and the development set are available.",
"The test set is blind and hold-out, and split into an EASY subset and a HARD subset according to the performance of BERT-base model (Devlin et al., 2019).",
"The test results are obtained by submitting the test predictions to the leaderboard.",
"LogiQA consists of 8,678 questions that are collected from National Civil Servants Examinations of China and manually translated into English by professionals.",
"The dataset is randomly split into train / dev / test sets with 7,376 / 651 / 651 samples respectively.",
"Both datasets contain multiple logical reasoning types.",
"results are shown in Tables 1 and",
"2. Since there is no public method for both datasets, we compare DAGN with the baseline Methods Dev Test BERT-Large 34.10 31.03 RoBERTa-Large 35.02 35.33 DAGN 35.48 38.71 DAGN (Aug) 36.87 39.32 Table 2: Experimental results (accuracy %) of DAGN compared with baseline models on LogiQA dataset.",
"models.",
"As for DAGN, we fine-tune RoBERTa-Large as the backbone.",
"DAGN (Aug) is a variant that augments the graph features.",
"DAGN reaches 58.20% of test accuracy on ReClor.",
"DAGN (Aug) reaches 58.30%, therein 75.91% on EASY subset, and 44.46% on HARD subset.",
"Compared with RoBERTa-Large, the improvement on the HARD subset is remarkably 4.46%.",
"This indicates that the incorporated discourse-based information supplements the shortcoming of the baseline model, and that the discourse features are beneficial for such logical reasoning.",
"Besides, DAGN and DAGN (Aug) also outperform the baseline models on LogiQA, especially showing 4.01% improvement over RoBERTa-Large on the test set.",
"We conduct ablation study on graph construction details as well as the graph reasoning module.",
"The results are reported in Table",
"3. Varied Graph Nodes We first use clauses or sentences in substitution for EDUs as graph nodes.",
"For clause nodes, we simply remove Explicit connectives during discourse unit delimitation.",
"So that the texts are just delimited by punctuation marks.",
"For sentence nodes, we further reduce the delimiter library to solely period (.).",
"Using the modified graphs with clause nodes or coarser sentence nodes, the accuracy of DAGN drops to 64.40%.",
"This indicates that clause or sentence nodes carry 1 https://bit.ly/2UOQfaS less discourse information and act poorly as logical reasoning units.",
"Varied Graph Edges We make two changes of the edges: (1) modifying the edge type, (2) modifying the edge linking.",
"For edge type, all edges are regarded as a single type.",
"For edge linking, we ignore discourse relations and connect every pair of nodes, turning the graph into fully-connected.",
"The resulting accuracies drop to 64.80% and 61.60% respectively.",
"It is proved that in the graph we built, edges link EDUs in reasonable manners, which properly indicates the logical relations.",
"Ablation on Graph Reasoning We remove the graph module from DAGN and give a comparison.",
"This model solely contains an extra prediction module than the baseline.",
"The performance on ReClor dev set is between the baseline model and DAGN.",
"Therefore, despite the prediction module benefits the accuracy, the lack of graph reasoning leads to the absence of discourse features and degenerates the performance.",
"It demonstrates the necessity of discourse-based structure in logical reasoning.",
"Recent datasets for reading comprehension tend to be more complicated and require models' capability of reasoning.",
"For instance, HotpotQA (Yang et al., 2018), WikiHop (Welbl et al., 2018), Open-BookQA (Mihaylov et al., 2018), and MultiRC (Khashabi et al., 2018) require the models to have multi-hop reasoning.",
"DROP (Dua et al., 2019) and MA-TACO (Zhou et al., 2019) need the models to have numerical reasoning.",
"WIQA (Tandon et al., 2019) and CosmosQA (Huang et al., 2019) require causal reasoning that the models can understand the counterfactual hypothesis or find out the cause-effect relationships in events.",
"However, the logical reasoning datasets (Yu et al., 2020; Liu et al., 2020) require the models to have the logical reasoning capability of uncovering the inner logic of texts.",
"Deep neural networks are used for reasoning-driven RC.",
"Evidence-based methods (Madaan et al., 2020; Huang et al., 2020; Rajagopal et al., 2020) generate explainable evidence from a given context as the backup of reasoning.",
"Graph-based methods (Qiu et al., 2019; De Cao et al., 2019; Cao et al., 2019; Ran et al., 2019; Chen et al., 2020b; Xu et al., 2020b; Zhang et al., 2020) explicitly model the reasoning process with constructed graphs, then learn and update features through message passing based on graphs.",
"There are also other methods such as neuro-symbolic models (Saha et al., 2021) and adversarial training (Pereira et al., 2020).",
"Our paper uses a graph-based model.",
"However, for uncovering logical relations, graph nodes and edges are customized with discourse information.",
"Discourse information provides a high-level understanding of texts and hence is beneficial for many of the natural language tasks, for instance, text summarization (Cohan et al., 2018; Joty et al., 2019; Xu et al., 2020a; Feng et al., 2020), neural machine translation (Voita et al., 2018), and coherent text generation (Wang et al., 2020; Bosselut et al., 2018).",
"There are also discourse-based applications for reading comprehension.",
"DISCERN (Gao et al., 2020) segments texts into EDUs and learns interactive EDU features.",
"Mihaylov and Frank (2019) provide additional discourse-based annotations and encodes them with discourse-aware self-attention models.",
"Unlike previous works, DAGN first uses discourse relations as graph edges connecting EDUs for texts, then learns the discourse features via message passing with graph neural networks.",
"In this paper, we introduce a Discourse-Aware Graph Network (DAGN) to addressing logical reasoning QA tasks.",
"We first treat elementary discourse units (EDUs) that are split by discourse relations as basic reasoning units.",
"We then build discourse-based logic graphs with EDUs as nodes and discourse relations as edges.",
"DAGN then learns the discourse-based features and enhances them with contextual token embeddings.",
"DAGN reaches competitive performances on two recent logical reasoning datasets ReClor and LogiQA.",
"The authors would like to thank Wenge Liu, Jianheng Tang, Guanlin Li and Wei Wang for their support and useful discussions.",
"This work was supported in part by National Natural Science Foundation of China (NSFC) under Grant No.U19A2073 and No.61976233, Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key) Grant No.2019B1515120039, Shenzhen Basic Research Project (Project No. JCYJ20190807154211365), Zhijiang Lab's Open Fund (No. 2020AA3AB14) and CSIG Young Fellow Support Fund."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"result",
"result",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated.",
"Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear reservoir layers interspersed with regular transformer layers, and show improvements in wall-clock compute time until convergence, as well as overall performance, on various machine translation and (masked) language modelling tasks.",
"Transformers (Vaswani et al., 2017) have dominated natural language processing (NLP) in recent years, from large scale machine translation (Ott et al., 2018) to pre-trained (masked) language modeling (Devlin et al., 2018; Radford et al., 2018), and are becoming more popular in other fields as well, from reinforcement learning (Vinyals et al., 2019) to speech recognition (Baevski et al., 2019) and computer vision (Carion et al., 2020).",
"Their success is enabled in part by ever increasing computational demands, which has naturally led to an increased interest in improving their efficiency.",
"Scalability gains in transformers could facilitate bigger, deeper networks with longer contexts (Kitaev et al., 2020; Wang et al., 2020; Beltagy et al., 2020; Kaplan et al., 2020; Tay et al., 2020b).",
"Conversely, improved efficiency could reduce environmental costs (Strubell et al., 2019) and hopefully help democratize the technology.",
"In this work, we explore a simple question: if some layers of the transformer are kept frozen i.e., never updated after random initialization can we match the performance of fully learned transformers, while being more efficient?",
"Surprisingly, the answer is resoundingly yes; and what is more, we find that freezing layers may actually improve performance.",
"Beyond desirable efficiency gains, random layers are interesting for several additional reasons.",
"Fixed randomly initialized networks (Gallicchio and Scardapane, 2020) converge to Gaussian processes in the limit of infinite width (Daniely et al., 2016), have intriguing interpretations in metric learning (Rosenfeld and Tsotsos, 2019; Giryes et al., 2016), and have been shown to provide excellent priors either for subsequent learning (Ulyanov et al., 2018) or pruning (Frankle and Carbin, 2018).",
"Fixed layers allow for efficient low-cost hardware implementations (Schrauwen et al., 2007) and can be characterized using only a random number generator and its seed.",
"This could facilitate distributed training and enables highly efficient deployment to edge devices, since it only requires transmission of a single number.",
"The strong performance of networks with fixed layers also sheds new light on the inner workings of BERT (Devlin et al., 2018), and layer-wise interpretations of such models (Rogers et al., 2020; Tenney et al., 2019).",
"It appears that not all layers are created equal",
"(Zhang et al., 2019)",
"is true to such an extent that some layers can simply remain random and fixed.",
"Random projections have a long history in machine learning.",
"By Cover's theorem",
"(Cover, 1965), any high-dimensional non-linear transformation is more likely to be linearly separable than its lower-or-equal-dimensional input space.",
"By Johnson-Lindenstrauss",
"(Johnson and Lindenstrauss, 1984), random projections distort Euclidean distances very little under mild assumptions, which is useful e.g. for dimensionality reduction and random indexing",
"(Sahlgren, 2005).",
"Fixed random layers in neural networks pre-date deep learning by far",
"(Gamba et al., 1961; Baum, 1988).",
"Indeed, random kernel methods have long been influential in machine learning",
"(Rahimi and Recht, 2008, 2009).",
"One way to think of such layers is as reser-voirs",
"(Lukosevicius and Jaeger, 2009), where a highly non-linear high-dimensional black box representation is provided to a lightweight readout network, as in echo state networks",
"(Jaeger, 2003)",
"and liquid state machines",
"(Maass et al., 2002).",
"The benefit of such an approach is that the reservoir has fixed parameters and is computationally efficient, as it can be pre-computed and does not",
"(necessar-ily)",
"require backpropagation.",
"In NLP, Wieting and Kiela",
"(2019)",
"showed that random sentence encoders present a strong baseline for text classification, with subsequent work showing applications in a variety of tasks from summarization to machine translation",
"(Enguehard et al., 2019; Garg et al., 2020; Pilault et al., 2020).",
"To our knowledge, this work is the first to examine this phenomenon in transformers, and the first to recursively alternate reservoirs with subsequent transformer layers acting as readout functions.",
"We introduce reservoir transformers, wherein fixed random reservoir layers are interspersed with regular updateable transformer layers.",
"The goal of this work is to put our understanding of transformer models on a more solid footing by providing empirical evidence of their capabilities even when some of their parameters are fixed.",
"Our contributions are as follows: We introduce a area under the convergence curve metric for measuring performance-efficiency trade-offs, and show that replacing regular transformer layers with reservoir layers leads to improvements.",
"We show that the addition of reservoir layers leads to improved test set generalization on a variety of tasks in a variety of settings.",
"We show that pre-trained masked language modelling architectures like BERT and RoBERTa",
"(Liu et al., 2019)",
"can benefit from having some of their layers frozen, both during pre-training as well as when fine-tuning on downstream tasks.",
"We experiment with different types of reservoir layers, including convolutional and recurrent neural network-based ones.",
"approximating upstream gradients using an approach we call backskipping , which can reduce the training compute further without sacrificing performance.",
"This paper is based on a very simple idea.",
"Neural networks are trained via backpropagation, which involves consecutive steps of matrix addition and multiplication, i.e., t +1 t J t ; J t = J L n L n L n 1 L 0 x for some objective J , parameterization and learning rate , with the gradient computed via the chain rule, where L i is the i -th layer of the neural network and x is the input.",
"Let L = Transformer",
"( X )",
"be a single layer in a Transformer network",
"(Vaswani et al., 2017), i.e., H = MultiHeadSelfAttn",
"Now, during every backward pass, we compute the Jacobian for parameters L at layer L , which are used to update the parameters of L , Lt , as well as to compute the next layer's Jacobian, thus back-propagating the gradients.",
"In this work however, for some of the layers, we still backprop-agate through them to compute gradients for earlier layers, but we never apply the parameter update .",
"As a result, these layers stay fixed at their initialization, saving computational resources.",
"Naturally, never updating some of the parameters is computationally more efficient, as some matrix addition operations can be skipped in the backward pass, but why is this not detrimental to the performance of the network?",
"In the early days of neural networks, the bottom layers were often kept fixed as associa-tors",
"(Block, 1962), or what",
"(Minsky and Papert, 2017)",
"called the Gamba perceptron",
"(Gamba et al., 1961; Borsellino and Gamba, 1961).",
"Fixed random networks",
"(Baum, 1988; Schmidt et al., 1992; Pao et al., 1994)",
"have been explored from many angles, including as random kitchen sink kernel machines",
"(Rahimi and Recht, 2008, 2009), ex-treme learning machines",
"(Huang et al., 2006)",
"and reservoir computing",
"(Jaeger, 2003; Maass et al., 2002; Lukosevicius and Jaeger, 2009).",
"In reservoir computing, input data are represented through fixed random high-dimensional non-linear representations, called reservoirs, which are followed by a regular",
"(often but not necessarily linear)",
"readout network to make the final classification decision.",
"The theoretical justification for these approaches lies in two well-known results in machine learning: Cover's theorem (Cover, 1965) on the separability of patterns states that high-dimensional non-linear transformations are more likely to be linearly separable; and the Johnson-Lindenstrauss lemma (Johnson and Lindenstrauss, 1984) shows that (most) random projections distort Euclidean distances very little.",
"Practically, random layers can be seen as a cheap way to increase network depth.",
"There are interesting advantages to this approach.",
"Fixed layers are known to have particularly low-cost hardware requirements and can be easily implemented on high-bandwidth FPGAs with low power consumption (Hadaeghi et al., 2017; Tanaka et al., 2019), or on optical devices (Hicke et al., 2013).",
"This might yield interesting possibilities for training in a distributed fashion across multiple devices, as well as for neuromorphic hardware (Neftci et al., 2017).",
"This approach also facilitates lower-latency deployment of neural networks to edge devices, since weights can be shared simply by sending the seed number, assuming the random number generator is known on both ends.",
"This work explores inserting random non-linear transformations, or what we call reservoir layers, into transformer networks.",
"Specifically, we experiment with a variety of reservoir layers: Transformer Reservoir: The standard transformer layer as described above, but with all parameters fixed after initialization, including the self-attention module.",
"FFN Reservoir: A transformer-style fixed feed-forward layer without any self-attention, i.e., FFN(LayerNorm ( Previous layer )) + Previous layer.",
"BiGRU Reservoir: A fixed bidirectional Gated Recurrent Unit (Cho et al., 2014) layer, which is closer in spirit to previous work on reservoir computing, most of which builds on recurrent neural network architectures.",
"CNN Reservoir: A fixed Convolutional Neural Network (LeCun et al., 1998) layer, specifically light dynamical convolution layers (Wu et al., 2019), which are known to be competitive with transformers in sequence-to-sequence tasks.",
"We find that all these approaches work well, to a certain extent.",
"For clarity, we focus primarily on the first two reservoir layers, but include a broader comparison in Appendix A. In each case, contrary to traditional reservoir computing, our reservoir layers are interspersed throughout a regular transformer network, or what we call a reservoir transformer.",
"Since random projections are not learned and might introduce noise, subsequent normal transformer readout layers might be able to benefit from additional depth while allowing us to recover from any adverse effects of randomness.",
"For example, previous work has shown that ResNets, with all of their parameters fixed except for the scale and shift parameters of batch normalization, can still achieve high performance, simply by scaling and shifting random features (Frankle et al., 2020).",
"Adding some form of noise to the parameters is also known to help convergence and generalization (Jim et al., 1995, 1996; Gulcehre et al., 2016; Noh et al., 2017).",
"We evaluate the proposed approach on a variety of well-known tasks in natural language processing, namely: machine translation, language modelling and masked language model pre-training.",
"We set out to do this work with the main objective of examining any potential efficiency gains, i.e. the relationship between compute time and task performance.",
"This is closely related to efforts in Green AI, which are concerned with the trade-offs between compute, data, and performance (Schwartz et al., 2019).",
"We propose to measure this trade-off via the area under the convergence curve (AUCC): similarly to how the area under the receiver operating characteristic (Bradley, 1997, AUC-ROC) measures a clas-sifier's performance independent of the classification threshold, AUCC measures a model's performance independent of the specific compute bud-get.",
"where f is the network and g is the evaluation metric, measured until convergence time T , which is the maximum convergence time of all models included in the comparison.",
"Note that time here is wall-clock time, not iterations.",
"By convergence, we mean that validation performance has stopped improving, and hence the convergence curve whose area we measure plots the desired metric over time.",
"Runs are averaged over multiple seeds and reported with standard deviation.",
"We normalize raw AUCC scores by their maximum to ensure a more interpretable [0 1] range.",
"One potential downside of this approach is that the AUCC metric could lead to higher scores for a model that converges quickly but to ultimately worse performance, if measured in a small window.",
"This can be solved by making sure that T is set sufficiently high.",
"We include the raw validation curves in the appendix to demonstrate that the chosen window sizes are sufficient and the results are not a influenced by this limitation.",
"In addition, we report the number of trainable parameters and the wall-clock training time until maximum performance (plus 95% and 99% convergence results in the appendix).",
"Finally, we show test set generalization in each experiment.",
"Overall, this gives us a wide set of axes along which to examine models.",
"We evaluate on IWSLT de-en (Cettolo et al., 2015) and WMT en-de (Bojar et al., 2014) for machine translation; enwiki8 (LLC, 2009) for language modelling; and experiment with RoBERTa (Liu et al., 2019) in our pretraining experiments.",
"For IWSLT, we follow the pre-processing steps in Edunov et al. (2018).",
"The train/val/test split is 129k/10k/6.8k sentences.",
"For WMT, we follow pre-process as in Ott et al. (2018), with 4.5M/16.5k/3k sentences in train/val/test.",
"For enwiki8, we follow the pre-processing steps in Dai et al. (2019).",
"The train/val/test split is 1M/54k/56k sentences.",
"For RoBERTa pretraining, we follow the pre-processing steps in Liu et al. (2019).",
"We use 8 Volta V100 GPUs for WMT and enwik8, 32 V100 GPUs for RoBERTa and a single V100 for IWSLT.",
"The hyperparameters for IWSLT14 and WMT16 were set to the best-performing values from Ott et al. (2018) and Kasai et al. (2020) respectively.",
"The enwik8 experiment settings followed Bachlechner et al. (2020) and the RoBERTa experiments followed Liu et al. (2019).",
"All the experiments in this paper were run with 3 random seeds and the mean and standard deviation are reported.",
"For the relatively small IWSLT, the T value in the AUCC metric was set to 4 hours.",
"For the larger WMT, we set it to 20 hours.",
"For enwiki8, it was 30 hours; and for the RoBERTa pre-training experiments, it was set to 60 hours.",
"The projection weights in random layers were initialized using orthogonal initialization (Saxe et al., 2013), since random orthogonal projections should ideally be maximally information-preserving, and which was found to work well empirically for initializing fixed random representations in previous work (Wieting and Kiela, 2019).",
"Biases and layer norm parameters were initialized using their respective PyTorch defaults (based on Xavier init; Glorot and Bengio, 2010).",
"We intersperse reservoir layers in alternating fashion starting from the middle.",
"Specifically, we alternate one reservoir layer with one transformer layer, and place the alternating block in the middle.",
"For example: a 7 -layer encoder LLLLLLL in which we replace three layers with reservoirs becomes LRLRLRL, and with two becomes LLRLRLL.",
"See Appendix C for a study comparing this strategy to alternative approaches (e.g., freezing in the bottom, middle or top).",
"In what follows, we first show our main result, on a variety of tasks: reservoir transformers mostly have better AUCC metrics; less training time per epoch; less convergence time until the best validation performance is achieved; and even improved test set generalization metrics.",
"As a strong baseline method, we compare to LayerDrop (Fan et al., 2019).",
"LayerDrop can also be seen as a method that dynamically bypasses parts of the computation during Transformer training in an attempt to improve efficiency, and making it a strong comparison to examine our methods.",
"Then, we examine whether we can minimize the expectation over the gradients of upstream layers in the network such that we do not at all have to pass gradients through the reservoir layers, skipping their backward pass.",
"Machine translation (MT) is one of the core tasks of NLP.",
"We demonstrate on two well-known MT datasets, IWSLT'14 German-English and WMT'16 English-German, that reservoir transformers obtain a better AUCC.",
"For the raw validation plots over time that were used to calculate the AUCC, please refer to Appendix F. Following Kasai et al. (2020), the architecture of the network is an N-layer reservoir transformer encoder, followed by a regular shallow oneor two-layer decoder.",
"This design choice has been shown to lead to very good speed and efficiency trade-offs, and serves as a good baseline for our experiments.",
"Moreover, shallow decoders make it easier to decide where to place reservoir layers (in the encoder) and makes it more straightforward to identify where performance gains come from.",
"Figure 1 shows the results for IWSLT (left) and WMT (middle).",
"On the y-axis we show validation AUCC for the BLEU metric; on the x-axis we show the number of updatable layers in the encoder.",
"The performance of a regular transformer encoder with 6 layers and a reservoir transformer encoder with 6 layers plus N additional reservoir layers are plotted for the same x-axis value to show the total number of updated layers.",
"Plots for the total number of layers (updatable plus not-updatable, so essentially shifted versions of the plots) are shown in Appendix E. WMT is much larger and requires a much deeper encoder, as illustrated by the fact that a certain minimum depth is required for reservoir transformers to achieve a comparable validation AUCC.",
"At test time, reservoir transformers outperform regular transformers for almost all encoder depths.",
"The FFN Reservoir seems to work best in both cases, which is surprising because it does not have any self-attention component at all.",
"This finding shows that self-attention, or the mechanism to summarize context information, should be learned if present.",
"Once the context features have been gathered, a random projection via a fixed FFN module appears to be beneficial.",
"Table 1 and 2 show the time it took to achieve the maximum validation BLEU score and how that relates to the regular transformer, demonstrating that reservoir transformers consistently converge faster in terms of wall-clock time.",
"We save up to 22% convergence wall-clock time using reservoir transformers as much with the same number of updateable layers.",
"We save as much as 27% time until convergence a 24 layer model on WMT, as shown in Table 2.",
"One other noticeable point is that we can see that the T Reservoir achieves similar performance to LayerDrop on IWSLT and WMT in terms of wall-clock per epoch and wall-clock time to the best performance.",
"However, on both tasks, FFN Reservoir performs much better than LayerDrop in terms of efficiency per epoch Model # Layers Frozen Max BLEU Train time Ratio # Params Train Time each until max (in hours) Trainable (Total) epoch (in seconds) Transformer 6 0 34.52 0.07 2.548 0.06 1 26.8M 122.73 1.16 8 0 34.59 0.11 2.557 0.05 1 31.1M 142.28 1.87 10 0 34.56 0.05 3.173 0.04 1 35.3M 161.66 1.54 12 0 34.29 0.12 3.521 0.09 1 39.5M 172.45 1.98 T Reservoir 6 2 34.37 0.12 2.422 0.03 0.95 22.6M (26.8M) 120.59 1.32 8 2 34.80 0.07 2.450 0.06 0.96 26.8M (31.1M) 134.49 1.76 10 2 34.70 0.03 2.831 0.05 0.89 31.1M (35.3M) 144.42 1.98 12 2 34.78 0.04 3.476 0.04 0.98 35.3M (39.5M) 159.43 1.67 FFN Reservoir 6 2 34.43 0.15 2.120 0.04 0.83 22.6M (25.8M) 107.71 1.73 8 2 34.56 0.16 2.203 0.06 0.86 26.8M (29.1M) 120.07 1.65 10 2 34.66 0.02 2.493 0.05 0.79 31.1M (33.3M) 130.11 1.43 12 2 34.76 0.03 3.241 0.04 0.92 35.3M (37.5M) 156.32 1.87 LayerDrop 6 2 34.59 0.15 2.364 0.08 0.92 22.6M (26.8M) 119.30 1.36 8 2 34.58 0.16 2.554 0.05 0.99 26.8M (31.1M) 138.62 1.44 10 2 34.57 0.07 3.404 0.06 1.07 31.1M (35.3M) 140.88 1.62 12 2 33.65 0.24 3.251 0.04 0.92 35.3M (39.5M) 160.85 1.49 Table 1: Wall-clock time (averaged over multiple runs) saved for IWSLT for different model types and encoder depths.",
"and achieves better/similar performance in less time in each case.",
"As a point of reference, a half hour gain on IWSLT would translate to a gain of several days in the training of bigger transformer models like GPT-3 (Brown et al., 2020).",
"We observe that reservoir transformers consistently perform better than, or are competitive to, regular transformers, both in terms of validation BLEU AUCC as well as test time BLEU, for all examined encoder depths.",
"To examine whether the same findings hold for other tasks, we evaluate on the enwiki8 (LLC,",
"2009) language modelling task.",
"We examine the BPC (bits per character) rate for a variety of network depths (since the task is language modelling, these layers are in the decoder).",
"The results show that except for the 64 -layer regular transformer, which appears to be particularly optimal for this task, we obtain consistently better BPC for all depths.",
"We observe similar trends during test time.",
"We train RoBERTa (Liu et al., 2019) models from scratch at a variety of depths, both in the normal and reservoir setting.",
"We find that these networks show minor differences in their best perplexity 4 6 8 10 12 14 16 # Updatable Decoder Layers 91 92 93 94 95 96 v a li d a cc u r a c y TransformerT ReservoirFFN Reservoir Transformer (frozen finetuned) 4 6 8 10 12 14 16 # Updatable Decoder Layers 78 80 82 84 86 v a li d a cc u r a c y TransformerT ReservoirFFN Reservoir Transformer (frozen finetuned) Figure 2: Downstream RoBERTa performance on SST-2 (left) and MultiNLI-matched (right).",
"and similar AUCC perplexity (see Appendix D).",
"We then examine the performance of these models when fine-tuned on downstream tasks, specifically the well known SST-2 (Socher et al., 2013) and MultiNLI-matched (Williams et al., 2017) tasks.",
"When fine-tuning the reservoir models, we keep the reservoir layers fixed (also fine-tuning them did not work very well, see Appendix D).",
"Figure 2 shows the results of fine-tuning.",
"We observe that the reservoir transformer outperforms normal RoBERTa at all depths in both tasks.",
"At lower depth, the improvements are substantial.",
"As a sanity check, we also experiment with freezing some of the layers in a regular pre-trained RoBERTa model during fine-tuning only (Trans-former frozen finetuned in the Figure) and show that this helps a little but is still outperformed by the reservoir transformer.",
"These findings suggest that we can train a RoBERTa model without updating all of the layers, achieving similar perplexity at a similar computational cost, but with better downstream performance.",
"This strategy could prove to be benefi-cial in a wide variety of pre-training scenarios.",
"We follow Jawahar et al. (2019) and investigate what the frozen layers in the Reservoir Transformer have actually learned (while being frozen) as measured by probing tasks, reported in Table 4.",
"The set of tasks comprises one surface task, three syntactic tasks, and five semantic tasks.",
"From the table, we can see that generally probing performance is quite similar between Transformer and the T Reservoir model.",
"We also noticed that the representations collected after the reservoir layer (3, 5, 7, 9) in the T Reservoir actually have significantly better performance over the regular Transformer representations across all the probing tasks.",
"Related to our findings, Voita and Titov (2020) show that the wholly-randomly-initialized model representations can still have reasonable probing accuracy if they are contextual-ized, though the accuracy is strictly worse than a trained one.",
"These findings raise interesting repercussions for the study of BERTology, as it clearly shows that even completely random and frozen layers can represent linguistic phenomena.",
"With the reservoir transformers as described above, we obtain better efficiency by skipping the gradient application matrix addition step in some of the layers (i.e., updating the weights).",
"One step further would be to investigate skipping the entire backward pass for reservoirs altogether, which would save us from having to do the much more expensive matrix multiplication for these layers that is required for the propagation of gradients through a regular layer.",
"We report on preliminary experiments where in the backward pass we replace the gradients for the layer L i going into the reservoir L i +1 with a noisy estimate (Jaderberg et al., 2017; Czarnecki et al., 2017).",
"Promisingly, Oktay et al. (2020) recently asked why spend resources on exact gradients when we're going to use stochastic optimiza-tion? and show that we can do randomized autodifferentiation quite successfully.",
"Here, rather than minimizing the actual gradients L i Li , we minimize their expectation and train via continuous-action REINFORCE (Williams, 1992).",
"That is, L i becomes a policy a : s where we sample actions a N ( , 1) .",
"We train to minimize the gradient prediction loss via MSE, i.e., 1 n (cid:80) ni =0 ( R i V i ( a )) 2 , and the REINFORCE loss E a [log( a ) ( R V ( a ))] , where the value network V acts as the baseline.",
"R is defined as the mean of the gradients of the top layer L i +2 , with the sign flipped.",
"Thus, simply put, we train to minimize the expectation of the true gradients at the layer directly following the reservoir.",
"We employ an annealing scheme where we first train the value network and propagate the true gradients during warmup.",
"Afterwards, we anneal the probability of backskipping instead of doing a true backward pass (multiplying the probability by 0 . 99 every iteration until we only backskip).",
"We experimented with setting R to the negation of the total loss but found the mean upstream gradient reward to work better.",
"We call this approach backskipping .",
"As shown in Table 3, the backskip reservoir approach leads to a higher maximum BLEU score than the regular transformer, with a much higher AUCC and much lower training time.",
"The encoder depth is 8 with 2 frozen.",
"Appendix G shows the raw validation BLEU curves over time.",
"We observe that this approach helps especially during the earlier stages of training.",
"This finding opens up intriguing possibilities for having parts of neural networks be completely frozen both in the for-ward as well as in the backward pass, while still contributing to the overall model computation.",
"The computational cost is heavily reduced given that we completely bypass the expensive back-propagation computation in the reservoir layers.",
"Backskipping is shown to be a promising approach to further reduce computational costs, and would be even more efficient from a hardware perspective since the circuitry for such layers (which do not need to propagate gradients) can be hardwired.",
"Recent work has shown that modern NLP models are able to function with different numbers of layers for different examples (Elbayad et al., 2019; Fan et al., 2019; He et al., 2021); that different layers specialize for different purposes (Zhang et al., 2019); that layers can be compressed (Li et al., 2020; Zhu et al., 2019; Shen et al., 2020; Sun et al., 2020); and, that layers can be reordered (Press et al., 2019).",
"There is a growing body of work in efficient self-attention networks (Tay et al., 2020b), such as linear attention (Wang et al., 2020), on how to process long context information (Beltagy et al., 2020; Ainslie et al., 2020) and on approximations to make transformers more scalable (Kitaev et al., 2020; Katharopoulos et al., 2020).",
"BigBIRD (Zaheer et al., 2020) provides random keys as additional inputs to its attention mechanism.",
"Locality sensitive hashing (LSH) as employed e.g. in Reformer (Kitaev et al., 2020) utilizes a fixed random projection.",
"Random Feature Attention (Peng et al., 2021) uses random feature methods to approximate the softmax function.",
"Performer (Choromanski et al., 2020) computes the transformer's multi-head attention weights as a fixed orthogonal random projection.",
"Closely related to this work, Tay et al. (2020a) showed that randomized alignment matrices in their Synthe-sizer architecture are sufficient for many NLP tasks.",
"While these works focus on random attention, we show that entire layers can be random and fixed.",
"We also show that entire layers can be replaced by fixed random projections that do not have any attention whatsoever.",
"Beyond transformers, random features have been extensively explored.",
"Examples of this include FreezeOut (Brock et al., 2017), deep reservoir computing networks (Scardapane and Wang, 2017; Gallicchio and Micheli, 2017), as well as applications in domains as varied as text classification (Conneau et al., 2017; Zhang and Bowman, 2018; Wieting and Kiela, 2019) or music classification (Pons and Serra, 2019).",
"It is well known that randomly initialized networks can display impressive performance on their own (Ulyanov et al., 2018; Rosenfeld and Tsotsos, 2019; Ramanujan et al., 2020; Voita and Titov, 2020), which underlies, for example, the recently popularized lottery ticket hypothesis (Frankle and Carbin, 2018; Zhou et al., 2019).",
"We know that learning deep over-parameterized networks appears to help in general (Li and Liang, 2018; Du et al., 2019).",
"Our method constitutes a way to add both depth and parameters to transformer networks without much computational cost.",
"This work demonstrated that state-of-the-art transformer architectures can be trained without updating all of the layers.",
"This complements a long history in machine learning of harnessing the power of random features.",
"We use the area under the convergence curve (AUCC) metric to demonstrate that on a variety of tasks, and in a variety of settings, reservoir transformers achieve better performance-efficiency trade-offs.",
"We show that such reservoir transformers show better convergence rates and test-set generalization.",
"We demonstrated that the backward pass can be skipped altogether, opening up exciting vanues for future research.",
"Future work includes further investigating hybrid networks and backskipping strategies, as well as utilizing pruning.",
"We thank Eric Wallace, Zhewei Yao, Kevin Lin, Zhiqing Sun, Zhuohan Li, Angela Fan, Shaojie Bai, and anonymous reviewers for their comments and suggestions.",
"SS and KK were supported by grants from Samsung, Facebook, and the Berkeley Deep Drive Consortium."
] | [
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"result",
"objective",
"objective",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"We propose a grounded dialogue state encoder which addresses a foundational issue on how to integrate visual grounding with dialogue system components.",
"As a test-bed, we focus on the GuessWhat?!",
"game, a two-player game where the goal is to identify an object in a complex visual scene by asking a sequence of yes/no questions.",
"Our visually-grounded encoder leverages synergies between guessing and asking questions, as it is trained jointly using multi-task learning.",
"We further enrich our model via a cooperative learning regime.",
"We show that the introduction of both the joint architecture and cooperative learning lead to accuracy improvements over the baseline system.",
"We compare our approach to an alternative system which extends the baseline with reinforcement learning.",
"Our in-depth analysis shows that the linguistic skills of the two models differ dramatically, despite approaching comparable performance levels.",
"This points at the importance of analyzing the linguistic output of competing systems beyond numeric comparison solely based on task success.",
"1 1 Introduction Over the last few decades, substantial progress has been made in developing dialogue systems that address the abilities that need to be put to work during conversations: Understanding and generating natural language, planning actions, and tracking the information exchanged by the dialogue participants.",
"The latter is particularly critical since, for communication to be effective, participants need to represent the state of the dialogue and the common ground established through the conversation (Stalnaker, 1978; Lewis, 1979; Clark, 1996).",
"1 Equal contribution by R. Shekhar and A. Venkatesh.",
"this study, we develop a dialogue agent that builds a representation of the context and the dialogue state by integrating information from both the visual and linguistic modalities.",
"We take the Guess-What?!",
"game (de Vries et al., 2017) as our testbed, a two-player game where a Questioner faces the task of identifying a target object in a visual scene by asking a series of yes/no questions to an Oracle.",
"We model the agent in the Questioner's role.",
"To model the Questioner, previous work relies on two independent models to learn to ask questions and to guess the target object, each equipped with its own encoder (de Vries et al., 2017; Strub et al., 2017; Zhu et al., 2017; Lee et al., 2017; Shekhar et al., 2018; Zhang et al., 2018).",
"We propose an end-to-end architecture with a single visually-grounded dialogue state encoder (cf. Figure 1).",
"Our system is trained jointly in a supervised learning setup, extended with a cooperative learning (CL) regime: By letting the model play the game with self-generated dialogues, the components of the Questioner agent learn to better perform the overall Questioner's task in a cooperative manner.",
"Das et al. (2017b) have explored the use of CL to train two visual dialogue agents that receive joint rewards when they play a game successfully.",
"To our knowledge, ours is the first approach where cooperative learning is applied to the internal components of a grounded conversational agent.",
"Our cooperative learning regime can be seen as an interesting alternative to reinforcement learning (RL)which was first applied to GuessWhat?!",
"by Strub et al. (2017)because it is entirely differentiable and computationally less expensive to train than RL.",
"Little is known on how this learning approach compares to RL not only regarding task success, but also in terms of the quality of the linguistic output, a gap we seek to fill in this paper.",
"In particular, our contributions are: 2 The introduction of a single visually-grounded dialogue state encoder jointly trained with the guesser and question generator modules to address a foundational question of how to integrate visual grounding with dialogue system components; this yields up to 9% improvement on task success.",
"The effectiveness of cooperative learning, which yields an additional increase of 8.7% accuracy, while being easier to train than RL.",
"A first in-depth study to compare cooperative learning to a state-of-the-art RL system.",
"Our study shows that the linguistic skills of the models differ dramatically, despite approaching comparable task success levels.",
"This underlines the importance of linguistic analysis to complement solely numeric evaluation.",
"Task-oriented dialogue systems The conventional architecture of task-oriented dialogue systems includes a pipeline of components, and the task of tracking the dialogue state is typically modelled as a partially-observable Markov decision process (Williams et al., 2013; Young et al., 2013; Kim et al., 2014) that operates on a symbolic dialogue state consisting of predefined variables.",
"The use of symbolic representations to characterise the state of the dialogue has some advantages (e.g., ease of interfacing with knowledge bases), but it has also some key disadvantages: the variables to be tracked have to be defined in advance and the system needs to be trained on data annotated with explicit state configurations.",
"Given these limitations, there has been a shift towards neural end-to-end systems that learn their own representations.",
"Early works focus on non-goal-oriented chatbots (Vinyals and Le, 2015; Sor-doni et al., 2015; Serban et al., 2016; Li et al., 2016a,b).",
"Bordes et al. (2017) propose a memory network to adapt an end-to-end system to task-oriented dialogue.",
"Recent works combine conventional symbolic with neural approaches (Williams et al., 2017; Zhao and Eskenazi, 2016; Rastogi et al., 2018), but all focus on language-only dialogue.",
"We propose a visually grounded task-oriented end-to-end dialogue system which, while maintaining the crucial aspect of the interaction of the various modules at play in a conversational agent, grounds them through vision.",
"Visual dialogue agents In recent years, researchers in computer vision have proposed tasks that combine visual processing with dialogue interaction.",
"Pertinent datasets created by Das et al. (2017a) and de Vries et al. (2017) include VisDial and GuessWhat?!",
", respectively, where two participants ask and answer questions about an image.",
"While impressive progress has been made in combining vision and language, current models make simplifications regarding the integration of these two modalities and their exploitation for task-related actions.",
"For example, the models proposed for VisDial by Das et al. (2017a) concern an image guessing game where one agent does not see the target image (thus, no multimodal understanding) and is required to imagine' it by asking questions.",
"The other agent does see the image, but only responds to questions without the need to perform additional actions.",
"In GuessWhat?!",
", the Questioner agent sees an image and asks questions to identify a target object in it.",
"The Questioner's role hence involves a complex interaction of vision, language, and guessing actions.",
"Most research to date has investigated approaches consisting of different models trained independently (de Vries et al., 2017; Strub et al., 2017; Zhu et al., 2017; Lee et al., 2017; Shekhar et al., 2018; Zhang et al., 2018).",
"We propose the first multimodal dialogue agent for the Guess-What?!",
"task where all components of the Questioner agent are integrated into a joint architecture that has at its core a visually-grounded dialogue state encoder (cf. Figure 1).",
"Reinforcement learning for visual dialogue agents was introduced by Das et al. (2017b) for VisDial and by Strub et al. (2017) for Guess-What?!",
".",
"Our joint architecture allows us to explore a simpler solution based on cooperative learning between the agent's internal modules (see Section 5 for details).",
"The GuessWhat?!",
"game (de Vries et al., 2017) is a simplified instance of a referential communication task where two players collaborate to identify a referenta setting used extensively in human-human collaborative dialogue (Clark and Wilkes-Gibbs, 1986; Yule, 1997; Zarrie et al., 2016).",
"The GuessWhat?!",
"dataset 3 was collected via Amazon Mechanical Turk by de Vries et al. (2017).",
"The task involves two human participants who see a real-world image, taken from the MS-COCO dataset (Lin et al., 2014).",
"One of the participants (the Oracle) is assigned a target object in the image and the other participant (the Questioner) has to guess it by asking Yes/No questions to the Oracle.",
"There are no time constraints to play the game.",
"Once the Questioner is ready to make a guess, the list of candidate objects is provided and the game is considered successful if the Questioner picks the target object.",
"The dataset consists of around 155k English dialogues about approximately 66k different images.",
"Dialogues contain on average 5.2 questions-answer pairs.",
"We focus on developing an agent who plays the role of the Questioner in GuessWhat?!",
".",
"As a baseline model (BL), we consider our own implementation of the best performing system put forward by de Vries et al. (2017).",
"It consists of two independent models: a Question Generator (QGen) and a Guesser.",
"For the sake of simplicity, QGen asks a fixed number of questions before the Guesser predicts the target object.",
"QGen is implemented as an Recurrent Neural Network (RNN) with a transition function handled with Long-Short-Term Memory (LSTM) (Hochre-iter and Schmidhuber, 1997), on which a probabilistic sequence model is built with a Softmax classifier.",
"At each time step in the dialogue, the model receives as input the raw image and the dialogue history and generates the next question 3 Dataset: https://guesswhat.ai/download .",
"one word at a time.",
"The image is encoded by extracting its VGG-16 features (Simonyan and Zisserman, 2014).",
"In our new joint architecture (described below in Section 4.2), we use ResNet152 (He et al., 2016) features instead of VGG, because they tend to yield better performance in image classification and are more efficient to compute.",
"For the baseline model it turns out that the original VGG-16 features lead to better performance (41.8% accuracy for VGG-16 vs. 37.3% with ResNet152 features).",
"While we use ResNet152 features in our models, we keep the original VGG-16 feature configuration as de Vries et al. (2017), which constitutes a stronger baseline.",
"The Guesser model exploits the annotations in the MS-COCO dataset (Lin et al., 2014) to represent candidate objects by their object category and their spatial coordinates.",
"This yields better performance than using raw image features in this case, as reported by de Vries et al. (2017).",
"The objects' categories and coordinates are passed through a Multi-Layer Perceptron (MLP) to get an embedding for each object.",
"The Guesser also takes as input the dialogue history processed by its own dedicated LSTM.",
"A dot product between the hidden state of the LSTM and each of the object embeddings returns a score for each candidate object.",
"The model playing the role of the Oracle is informed about the target object o target .",
"Like the Guesser, the Oracle does not have access to the raw image features.",
"It receives as input embeddings of the target object's category, its spatial coordinates, and the current question asked by the Questioner, encoded by a dedicated LSTM.",
"These three embeddings are concatenated and fed to an MLP that gives an answer (Yes or No).",
"In line with the baseline model, our Questioner agent includes two sub-modules, a QGen and a Guesser.",
"As in the baseline, the Guesser guesses after a fixed number of questions, which is a parameter tuned on the validation set.",
"Our agent architecture differs from the baseline model by de Vries et al.: Rather than operating independently, the language generation and guessing modules are connected through a common grounded dialogue state encoder (GDSE) which combines linguistic and visual information as a prior for the two modules.",
"Given this representation, we will MLP <person> <car> softmax ( ) MLP MLP h t visually grounded dialogue state category and coordinates of candidate objects <bat> <sos> is the it batter ResNet 152 QGen Guesser Figure 2: Question Generation and Guesser modules.",
"As illustrated in Figure 1, the encoder receives as input representations of the visual and linguistic context.",
"The visual representation consists of the second to last layer of ResNet152 trained on ImageNet.",
"The linguistic representation is obtained by an LSTM (LSTM e ) which processes each new question-answer pair in the dialogue.",
"At each question-answer QA t , the last hidden state of LSTM e is concatenated with the image features I , passed through a linear layer and a tanh activation to result in the final layer h t : h t = tanh ( W [ LSTM e ( qa 1: t 1 ); I ]) (1) where [ ; ] represents concatenation, I R 2048 1 , LSTM e R 1024 1 and W R 512 3072 (identical to prior work except for tuning the ResNet-specific parameters).",
"We refer to this final layer as the dialogue state , which is given as input to both QGen and Guesser.",
"As illustrated in Figure 2, our QGen and Guesser modules are like the corresponding modules by de Vries et al. (2017), except for the crucial fact that they receive as input the same grounded dialogue state representation.",
"QGen employs an LSTM (LSTM q ) to generate the token sequence for each question conditioned on h t , which is used to initialise the hidden state of LSTM q .",
"As input at every time step, QGen receives a dense embedding of the previously generated token w i 1 and the image features I : p ( w i ) = p ( w i | w 1 , ..., w i 1 , h t , I ) (2) We optimise QGen by minimising the Negative Log Likelihood (NLL) of the human dialogues and use the Adam optimiser (Kingma and Ba, 2015): LQ = (cid:88) i log p ( w i ) (3) Thus, in our architecture the LSTM q of QGen in combination with the LSTM e of the Encoder form a sequence-to-sequence model (Sutskever et al., 2014), conditioned on the visual and linguistic context in contrast to the baseline model, where question generation is performed by a single LSTM on its own.",
"The Guesser consists of an MLP which is evaluated for each candidate object in the image.",
"It takes the dense embedding of the category and the spatial information of the object to establish a representation r j R 512 1 for each object.",
"A score is calculated for each object by performing the dot product between the dialogue state h t and the object representation.",
"Finally, a softmax over the scores results in a probability distribution over the candidate objects: p ( o j ) = e h Tt r j (cid:80) j e h Tt r j (4) We pick the object with the highest probability and the game is successful if o guess = o target , where o guess = arg max j p ( o j ) .",
"As with QGen, we optimise the Guesser by minimising the NLL and again make use of Adam: LG = log p ( o target ) (5) The resulting architecture is fully differentiable.",
"In addition, the GDSE agent faces a multi-task optimisation problem: While the QGen optimises LQ and the Guesser optimises LG , the parameters of the Encoder ( W , LSTM e ) are optimised via both LQ and LG .",
"Hence, both tasks faced by the Questioner agent contribute to the optimisation of the dialogue state h t , and thus to a more effective encoding of the input context.",
"We first introduce the supervised learning approach used to train both BL and GDSE, then our cooperative learning regime, and finally the reinforcement learning approach we compare to.",
"In the baseline model, the QGen and the Guesser modules are trained autonomously with supervised learning (SL): QGen is trained to replicate",
"human questions and, independently, the Guesser is trained to predict the target object.",
"Our new architecture with a common dialogue state encoder allows us to formulate these two tasks as a multitask problem, with two different losses (Eq. 3 and 5 in Section 4.2).",
"These two tasks are not equally difficult: While the Guesser has to learn the probability distribution of the set of possible objects in the image, QGen needs to fit the distribution of natural language words.",
"Thus, QGen has a harder task to optimize and requires more parameters and training iterations.",
"We address this issue by making the learning schedule task-dependent.",
"We call this setup modulo-n training, where n indicates after how many epochs of QGen training the Guesser is updated together with QGen.",
"Using the validation set, we experimented with n from 5 to 15 and found that updating the Guesser every 7 epochs worked best.",
"With this optimal configuration, we then train GDSE for 100 epochs (batch size of 1024, Adam, learning rate of 0.0001) and select the Questioner module best performing on the validation set (henceforth, GDSE-SL or simply SL).",
"Once the model has been trained with SL, new training data can be generated by letting the agent play new games.",
"Given an image from the training set used in the SL phase, we generate a new training instance by randomly sampling a target object from all objects in the image.",
"We then let our Questioner agent and the Oracle play the game with that object as target, and further train the common encoder using the generated dialogues by backpropagating the error with gradient descent through the Guesser.",
"After training the Guesser and the encoder with generated dialogues, QGen needs to readapt' to the newly arranged encoder parameters.",
"To achieve this, we re-train QGen on the human data with SL, but using the new encoder states.",
"Also here, the error is backpropagated with gradient descent through the common encoder.",
"Regarding modulo-n , in this case QGen is updated at every n th epoch, while the Guesser is updated at all other epochs; we experimented with n from 3-7 and set it to the optimal value of",
"5. The GDSE previously trained with SL is further trained with this cooperative learning regime for 100 epochs (batch size of 256, Adam, learning rate of 0.0001), and we select the Questioner module performing best on the validation set (henceforth, GDSE-CL or simply CL).",
"Strub et al. (2017) proposed the first extension of BL (de Vries et al., 2017) with deep reinforcement learning (RL).",
"They present an architecture for end-to-end training using an RL policy.",
"First, the Oracle, Guesser, and QGen models are trained independently using supervised learning.",
"Then, QGen is further trained using a policy gradient.",
"We use the publicly available code and pre-trained model based on Sampling (Strub et al., 2017), which resulted in the closest performance to what was reported by the authors.",
"4 This is the RL model we use throughout the rest of the paper.",
"We use the same train (70%), validation (15%), and test (15%) splits as de Vries et al. (2017).",
"The test set contains new images not seen during training.",
"We use two experimental setups for the number of questions to be asked by the question generator, motivated by prior work: 5 questions (5Q) following de Vries et al. (2017), and 8 questions (8Q) as in Strub et al. (2017).",
"As noted in Section 3, on average, there are 5.2 questions per dialogue in the GuessWhat?!",
"data set.",
"For evaluation, we report task success in terms of accuracy (Strub et al., 2017).",
"To neutralize the effect of random sampling in training CL, we trained the model 3 times.",
"RL is tested 3 times with sampling.",
"We report means and standard deviation (for some tables these are provided in the supplementary material; see footnote 2).",
"Grounded joint architecture First of all, our visually-grounded dialogue state encoder is effective.",
"GDSE-SL outperforms the baseline by de Vries et al. (2017) significantly in both setups (absolute accuracy improvements of 6.6% 4 Their result of 53.3% accuracy published in Strub et al. (2017) is obsolete, as stated on their GitHub page ( https: //github.com/GuessWhatGame/guesswhat ) where they report 56.5% for sampling and 58.4% for greedy search.",
"By running their code, we could only replicate their results with sampling, obtaining 56%, while greedy and beam search resulted in similar or worse performance.",
"Our analysis showed that greedy and beam search have the additional disadvantage of learning a smaller vocabulary.",
"and 9%).",
"To evaluate the impact of the multitask learning aspect, we did an ablation study and used the encoder-decoder architecture to train the QGen and Guesser modules independently.",
"With such a decoupled training we obtain lower results: 44% and 43.7% accuracy for 5Q and 8Q, respectively.",
"Hence, the multi-task component brings an increase of up to 6% over the baseline.",
"5 Cooperative learning and RL The introduction of the cooperative learning approach results in a clear improvement over GDSE-SL: +8.7% (8Q: from 49.7 to 58.4) and +5.9% (with 5Q).",
"Despite its simplicity, our GDSE-CL model achieves a task success rate which is comparable to RL: In the 8Q setup, GDSE-CL reaches an average accuracy of 58.4 versus 56.3 for RL, giving CL a slight edge in this setup (+2.1%), while in the 5Q setup RL is slightly better (+2.5%).",
"Overall, the accuracy of the CL and RL models is close.",
"The interesting question is how the linguistic skills and strategy of these two models differ, to which we turn in the next section.",
"We compared to Strub et al. (2017), but RL has also been put forward by Zhang et al. (2018), who report 60.7% accuracy (5Q).",
"This result is close to our highest GDSE-CL result (60.8 0.51, when optimized for 10Q).",
"6 Their RL system integrates several partial reward functions to increase coherence, which is an interesting aspect.",
"Yet their code is not publicly available.",
"We leave the comparison to Zhang et al. (2018) and adding RL to GDSE to future work.",
"5 While de Vries et al. (2017) originally report an accuracy of 46.8%, this result was later revised to 40.8%, as clarified on their GitHub page.",
"Our own implementation of the baseline system achieves an accuracy of 41.2%.",
"6 Since our aim is to compare to the best setup for BL (5Q) and RL (8Q), we do not report our results with 10Q in Table",
"In this section, we present a range of analyses that aim to shed light on the performance of the models.",
"They are carried out on the test set data using the 8Q setting, which yields better results than the 5Q setting for the GDSE models and RL.",
"Given that there is only a small difference in accuracy for the baseline with 5Q and 8Q, for comparability we analyse dialogues with 8Q also for BL.",
"We analyse the language produced by the Questioner agent with respect to three factors: (1) lexical diversity, measured as type/token ratio over all games, (2) question diversity, measured as the percentage of unique questions over all games, and (3) the number of games with questions repeated verbatim.",
"We compute these factors on the test set for the models and for the human data (H).",
"As shown in Table 2, the linguistic output of SL & CL is closer to the language used by humans: Our agent is able to produce a much richer and less repetitive output than both BL and RL.",
"In particular, it learns to use a more diverse vocabulary, generates more unique questions, and repeats questions within the same dialogue at a much lower rate than the baseline and RL: 93.5% of the games played by BL contain at least one verbatim question repetition, for RL this happens in 96.47% of the cases, whereas for SL and CL this is for only 55.8% and 52.19% of the games, respectively.",
"To further understand the variety of questions asked by the agents, we classify questions into different types.",
"We distinguish between questions that aim at getting the category of the target object ( ENTITY questions, e.g., is it a vehicle?' ) and questions about properties of the queried objects ( ATTRIBUTE questions, e.g., is it square?' or are Humans [ success",
"they",
"standing?' ).",
"Within ATTRIBUTE questions, we make a distinction between color, shape, size, texture, location, and action questions.",
"Within ENTITY questions, we distinguish questions whose focus is an object category or a super-category (see the supplementary material for example ques-tions).",
"The classification is done by manually extracting keywords for each question type from the human dialogues, and then applying an automatic heuristic that assigns a class to a question given the presence of the relevant keywords.",
"7 This procedure allows us to classify 91.41% of the questions asked by humans.",
"The coverage is higher for the questions asked by the models: 98.88% (BL), 94.72% (SL), 94.11% (CL) and 99.51 % (RL).",
"8 The statistics are shown in Table",
"3. We use Kullback-Leibler (KL) divergence to measure how the output of each model differs from the human distribution of fine-grained question classes.",
"The baseline's output has the highest degree of divergence: For instance, the BL model does never ask any SHAPE or TEXTURE questions, and hardly any SIZE questions.",
"The output of the RL model also differs substantially from the human dialogues: It asks a very large number of LOCATION questions (74.8% vs. 40% for humans).",
"Our model, in contrast, generates question types that resemble the human distribution more closely.",
"We also analyse the structure of the dialogues in terms of the sequences of question types asked.",
"As expected, both humans and models almost always start with an ENTITY question (around 97% for BL, SL and CL, 98.7% for RL, and 78.48% for hu-mans), in particular a SUPER-CATEGORY (around 70% for BL, SL and CL, 84% for RL, and 52.32% for humans).",
"In some cases, humans start by ask-7 A question may be tagged with several attribute classes if keywords of different types are present.",
"E.g., Is it the white one on the left? is classified as both COLOR and LOCATION .",
"8 In the supplementary material we provide details on the question classification procedure: the lists of keywords by class, the procedure used to obtain these lists, as well as the pseudo-code of the heuristics used to classify the questions.",
"ing questions directly about an attribute that may easily distinguish an object from others, while this is very uncommon for models.",
"Figure 3 shows an example: The human dialogue begins with an ATTRIBUTE question ( does it have cereal on it?' ), which in this case is not very effective and leads to a change in strategy at turn",
"4. The CL model starts by asking an OBJECT question ( is it a donut?' ) while the RL model begins with a more generic SUPER-CATEGORY question ( is it food?' ).",
"We check how the answer to a given question type affects the type of the follow-up question.",
"In principle, we expect to find that question types that are answered positively will be followed by more specific questions.",
"This is indeed what we observe in the human dialogues, as shown in Table",
"4. For example, when a SUPER-CATEGORY question is answered positively, humans follow up with an OBJECT or ATTRIBUTE question 89.56% of the time.",
"This trend is mirrored by all models.",
"Overall, the models also learn the strategy to move from an OBJECT to an ATTRIBUTE question when an OBJECT question receives a Yes answer.",
"The BL, SL, and CL models do this to a lesser extent than humans, while the RL model systematically",
"transitions to attributes (in 99.46% of cases), using mostly LOCATION questions as pointed out above.",
"For example (Figure 3), after receiving an affirma-tive answer to the OBJECT question is it a donut?' both CL and RL shift to a LOCATION question.",
"Once location is established, CL moves on to other attributes while RL keeps asking the same LOCATION question, which leads to failure.",
"Further illustrative examples are given in the supplementary material.",
"In order to better understand the effect of the cooperative learning regime, we trace the evolution of linguistic factors identified above over the CL epochs.",
"As illustrated in Figure 4",
"(a) and",
"(b), through the epochs the CL model learns to use a richer vocabulary and more diverse questions, moving away from the levels achieved by BL and RL, overpassing SL and moving toward humans.",
"The CL model progressively produces fewer repeated questions within a dialogue, improving over SL in the last few epochs, cf.",
"Figure 4",
"(c).",
"Finally,",
"(d) illustrates the effect of modulon training: As the model is trained on generated dialogues, its linguistic output drifts away from the human distribution of question types; every 5 th epoch QGen is trained via supervision, which brings the model's behaviour closer back to human linguistic style and helps decrease the drift.",
"We present a new visually-grounded joint Questioner agent for goal-oriented dialogue.",
"First, we show that our architecture archives 69% accuracy improvements over the GuessWhat?!",
"baseline system (de Vries et al., 2017).",
"This way, we address a foundational limitation of previous approaches that model guessing and questioning separately.",
"Second, our joint architecture allows us to propose a two-phase cooperative learning approach (CL), which further improves accuracy.",
"It results in our overall best model and reaches state-of-the-art results (cf. Section 6).",
"We compare CL to the system proposed by Strub et al. (2017) which extends the baseline with reinforcement learning (RL).",
"We find that the two approaches (CL and RL) achieve overall relatively similar task success rates.",
"However, evaluating on task success is only one side of the coin.",
"Finally and most importantly, we propose to pursue an in-depth analysis of the quality of the dialogues by visual conversational agents, which is an aspect often neglected in the literature.",
"We analyze the linguistic output of the two models across three factors (lexical diversity, question diversity, and repetitions) and find them to differ substantially.",
"The CL model uses a richer vocabulary and inventory of questions, and produces fewer repeated questions than RL.",
"In contrast, RL highly relies on asking location questions, which might be explained by a higher re-liance on spatial and object-type information explicitly given to the Guesser and Oracle models.",
"Limiting rewards to task success or other rewards not connected to the language proficiency does not stimulate the model to learn rich linguistic skills, since a reduced vocabulary and simple linguistic structures may be an efficient strategy to succeed at the game.",
"Overall, the presence of repeated questions remains an important weakness of all models, resulting in unnatural dialogues.",
"This shows that there is still a considerable gap to human-like conversational agents.",
"Looking beyond task success can provide a good basis for extensions of current architectures, e.g., Shekhar et al. (2018) add a decision-making component that decides when to stop asking questions which results in less repetitive and more human-like dialogues.",
"Our joint architecture could easily be extended with such a component.",
"The work carried out by the Amsterdam team was partially funded by the Netherlands Organisation for Scientific Research (NWO) under VIDI grant nr. 276-89-008, Asymmetry in Conversation .",
"We thank the University of Trento for generously funding a research visit by Barbara Plank to CIMeC that led to part of the work presented in this paper.",
"In addition, we kindly acknowledge the support of NVIDIA Corporation with the donation to the University of Trento of the GPUs used in our research."
] | [
"objective",
"method",
"abstain",
"method",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Recent work in natural language processing (NLP) has focused on ethical challenges such as understanding and mitigating bias in data and algorithms; identifying objectionable content like hate speech, stereotypes and offensive language; and building frameworks for better system design and data handling practices.",
"However, there has been little discussion about the ethical foundations that underlie these efforts.",
"In this work, we study one ethical theory, namely deontological ethics, from the perspective of NLP.",
"In particular, we focus on the generalization principle and the respect for autonomy through informed consent.",
"We provide four case studies to demonstrate how these principles can be used with NLP systems.",
"We also recommend directions to avoid the ethical issues in these systems.",
"The 21st century is witnessing a major shift in the way people interact with technology, and natural language processing (NLP) is playing a central role.",
"A plethora of NLP applications such as question-answering systems (Bouziane et al., 2015; Gillard et al., 2006; Yang et al., 2018) used in diverse fields like healthcare (Sarrouti and Ouatik El Alaoui, 2017; Zweigenbaum, 2009), education (Godea and Nielsen, 2018; Raamadhurai et al., 2019), privacy (Ravichander et al., 2019; Shvartzshanider et al., 2018); machine translation systems (Cherry et al., 2019; Barrault et al., 2019; Nakazawa et al., 2019; Liu, 2018), conversational agents (Pietquin et al., 2020; Serban et al., 2018; Liu et al., 2016), recommendation systems (Al-harthi and Inkpen, 2019; Greenquist et al., 2019) etc. are deployed and used by millions of users.",
"NLP systems have become pervasive in current human lifestyle by performing mundane tasks like setting reminders and alarms to complex tasks like authors contributed equally to this work.",
"replying to emails, booking tickets and recommending movies/restaurants.",
"This widespread use calls for an analysis of these systems from an ethical standpoint.",
"Despite all the advances in efficiency and operations of NLP systems, little literature exists which broadly addresses the ethical challenges of these technologies.",
"Ethical theories have been studied for millennia and should be leveraged in a principled way to address the questions we are facing in NLP today.",
"Instead, the topic of ethics within NLP has come to refer primarily to addressing bias in NLP systems; Blodgett et al. (2020) provides a critical survey of how bias is studied in NLP literature.",
"The survey finds that research on NLP systems conceptualize bias differently and that the techniques are not well tied with the relevant literature outside of NLP.",
"This creates a gap between NLP research and the study of ethics in philosophy which leaves a rich body of knowledge untapped.",
"Our work bridges this gap by illustrating how a philosophical theory of ethics can be applied to NLP research.",
"Ethics (or ethical theory), is a theoretical and applied branch of philosophy which studies what is good and right, especially as it pertains to how humans ought to behave in the most general sense (Fieser, 1995).",
"As NLP research qual-ifies as a human activity, it is within the purview of ethics.",
"In particular, we are using a prescriptive , rather than descriptive , theory of ethics; prescriptive theories define and recommend ethical behavior whereas descriptive theories merely report how people generally conceive of ethical behavior.",
"We select two ethical principles from the deontological tradition of ethics and focus on how these principles are relevant to research in NLP.",
"Namely we look at the generalization principle and respect for autonomy through informed consent (Johnson and Cureton, 2019; Kleinig, 2009).",
"We select de-onotology because it is reasonable, provides clear ethical rules and comports with the legal idea of the rule of law in the sense that these ethical rules bind all persons equally, rather than shifting standards to effect a certain outcome.",
"We find that there are two main ways in which ethical guidelines can be applied in NLP (or to any other area of technology): 1. An ethical guideline can aid in deciding what topics within a field merit attention; that is, it answers the question which tasks have important ethical implications?.",
"2. An ethical guideline can aid in determining how to address a problem; that is, it answers the question what factors and methods are preferable in ethically solving this problem?.",
"We primarily address (1) and briefly touch on (2) by presenting four case studies relevant to NLP.",
"In each case study we use an ethical principle to identify an area of research that could potentially conflict with it, and suggest NLP directions to mitigate it.",
"Although we have selected two principles from a deontological perspective, we are not intimating that these principles can address all ethical issues nor that deontological ethics is the only ethical framework in which our rules and case studies could function (6).",
"Instead, we present the following as a starting point for NLP researchers less familiar but interested in applicable ethical theory.",
"Our primary contributions are: Providing an overview of two deontological principles along with a discussion on their limitations with a special focus on NLP.",
"Illustrating four specific case studies of NLP systems which have ethical implications under these principles and providing a direction to alleviate these issues.",
"While there are a number of categories of prescriptive ethical theories, including deontology (Kant, 1785), consequentialism (e.g., utilitarianism) (Ben-tham, 1843), and virtue ethics (Aristotle, 350 B.C.E.), we are only addressing deontology.",
"We do not take a stance in this paper as to whether or not there exists an objectively correct ethical theory, but we offer a brief sketch of deontological ethics and our reasons for using it.",
"Deontology or deontological ethics refers to a family of ethical theories which hold that whether an act is ethically good or bad is determined by its adherence to ethical rules (Alexander and Moore, 2016).",
"These rules can be agent-focused duties (e.g., duty to care for one's children) or patient-focused rights (e.g., right to life).",
"Such rules can also be formulated in modal logic, allowing for more precise reasoning over sets of rules (Hooker and Kim, 2018).",
"Deontology stands in contrast to another popular framework of ethics: consequentialism.",
"Consequentialism holds the ultimate consequences of an action to be the deciding factor regardless of the nature of the actions taken to get there.",
"We can illustrate the difference between them by observing how each of them might condemn something like racially biased hiring in academia.",
"1 A deontologist might say that this practice is wrong because it violates the human right to equal treatment regardless of race.",
"A consequentialist on the other hand, would argue that this is wrong because its effect is stymieing academic creativity by reducing intellectual diversity.",
"We ultimately select the deontological framework in this work for the following reasons: 1. We find deontology to be convincing in its own right, namely, its ability to delineate robust duties and rights which protect the value of each and every person.",
"2. The universally applicable rules 2 of deontology provide a good basis for providing recommendations to researchers.",
"Since rights and duties (at their core) are not situation dependent, they are tractable to address in NLP applications.",
"3 3. The focus on rights and duties which apply to everyone equally fits well with the widespread legal concept of the rule of law which states that every person is subject to the same laws.",
"We appeal to the fact that problems should be analyzed with a systematic framework, and ethical",
"1 Note that we are presenting generic examples of deontological and consequentialist frameworks and that a variety of",
"nuanced theories in each category exist.",
"2 While determining rules which apply universally across all cultures is a difficult task, the existence of organizations, such as the United Nations, presuppose the achievability of identifying internationally applicable norms.",
"3 In contrast to (action-based) utilitarianism which mandates evaluating the full consequences of each action.",
"theories provide precisely these frameworks.",
"Research should not be based on preconceived notions of ethics which can be overly subjective and inconsistent.",
"To more rigorously determine what is right and wrong, we rely on ethical theories.",
"Card and Smith (2020) present an analysis of ethics in machine learning under a consequentialist framework.",
"This paper is a kindred spirit in that we both seek to make a philosophical theory of ethics concrete within machine learning and NLP, yet the methods of the paper are somewhat orthogonal.",
"Card and Smith (2020) provide a comprehensive overview of how the particular nature of consequentialist ethics is relevant to machine learning whereas we intend to provide tangible examples of how deontological ethical principles can identify ethically important areas of research.",
"Saltz et al. (2019); Bender et al. (2020) advocate for explicitly teaching ethical theory as a part of machine learning and NLP courses; the case studies in this paper would be a logical extension of the material presented in such a course.",
"NLP research on ethics has primarily focused on two directions: (1) exploring and understanding the impact of NLP on society, and (2) providing algorithmic solutions to ethical challenges.",
"Hovy and Spruit (2016) started the conversation about the potential social harms of NLP technology.",
"They discussed the concepts of exclusion, overgeneralization, bias confirmation, topic underand overexposure , and dual use from the perspective of NLP research.",
"A lot of work followed this discussion and made contributions towards ethical frameworks and design practices (Lei-dner and Plachouras, 2017; Parra Escartn et al., 2017; Prabhumoye et al., 2019; Schnoebelen, 2017; Schmaltz, 2018), data handling practices (Lewis et al., 2017; Mieskes, 2017) and specific domains like education (Mayfield et al., 2019; Loukina et al., 2019), healthcare (uster et al., 2017; Benton et al., 2017) and conversational agents (Cercas Curry and Rieser, 2018; Henderson et al., 2018).",
"Our paper does not focus on a particular domain but calls for attention towards various NLP systems and what ethical issues may arise in them.",
"Most of the work providing algorithmic solutions has been focused on bias in NLP systems.",
"Shah et al. (2020); Tatman (2017); Larson (2017) aim to study the social impact of bias in NLP systems and propose frameworks to understand it better.",
"A large body of work (Bolukbasi et al., 2016; Sun et al., 2019; Zhao et al., 2019, 2017; Sap et al., 2019; Hanna et al., 2020; Davidson et al., 2019) directs its efforts to mitigate bias in data, representations, and algorithms.",
"Blodgett et al. (2020) provide an extensive survey of this work and point out the weaknesses in the research design.",
"It makes recommendations of grounding work analyzing bias in NLP systems in the relevant literature outside of NLP, understanding why system behaviors can be harmful and to whom, and engaging in a conversation with the communities that are affected by the NLP systems.",
"Although issues with bias are certainly within the scope of the principles we present, we do not specifically write on bias because it has already received a large amount of attention.",
"There is a variety of specific deontological theories which range from having one central, abstract principle (Kant, 1785) to having a handful of concrete principles (Ross, 1930).",
"Rather than comprehensively addressing one theory, we select two rules, one abstract and one concrete, which can fit within a variety of deontological theories.",
"The generalization principle is an abstract, broad-reaching rule which comes from traditional Kantian ethics.",
"The respect for autonomy is concrete and commonly seen in politics and bioethics.",
"The generalization principle has its roots in Immanuel Kant's theory of deontological ethics (Kant, 1785).",
"4 The generalization principle states the following (Johnson and Cureton, 2019).",
"An action A taken for reasons R is ethical if and only if a world where all people perform A for reasons R is conceivable.",
"It is clearer when phrased in the negative.",
"The main utility of the generalization principle is that it can identify unethical actions that may seem acceptable in isolated occurrences but lead to problems when habitually taken by everyone.",
"For example, let us take making and breaking a legal contract (the action) whenever it is convenient (the reasons); implicit in the reasons for making a 4 It is also referred to as the universal law formulation of Kant's categorical imperative.",
"contract is that the other person believes we will follow through (Johnson and Cureton, 2019).",
"If we universalize this and conceive of a world where everyone makes contracts which they have no intent of keeping, no one would believe in the sincerity of a contract.",
"Hence, no one would make contracts in the first place since they are never adhered to.",
"This is the sort of contradiction by which the generalization principle condemns an action and the rationale behind it.",
"Another example is plagiarism of research papers in conference submissions.",
"Let us assume that a top tier conference did not check for plagiarism because they trust in the honesty of the researchers.",
"In this case, a researcher G decides to take an action A of plagiarising a paper due to the following set of reasons R : (1) G believes that they would not get caught because the conference does not use plagiarism detection software, (2) publishing this paper in the said conference would boost G 's profile by adding 100 citations, and (3) this would increase G 's chances of getting a job.",
"Plagiarism in this case would be ungeneralizable and hence unethical.",
"If all researchers who want to boost their profile were to submit plagiarised papers, then every researcher's profile would be boosted by 100 citations, and 100 citations would lose their value.",
"Hence, this would not increase G 's chances of getting a job, contradicting R 3 .",
"Thus, G 's reasons for plagiarism are inconsistent with the assumption that everyone with same reasons plagiarises.",
"Respect for autonomy generally addresses the right of a person to make decisions which directly pertain to themselves.",
"One of the primary manifestations of this is the concept of informed consent , whereby a person A proposes to act in some way X on person B which would normally infringe on B 's right to self-govern.",
"Specifically, we use the formulation of informed consent given by Pugh (2020) based on Kleinig (2009): 1. B must be sufficiently informed with regards to the relevant facts concerning X to understand what X is (and what consequences are likely to occur as a result of X ).",
"2. On the basis of this information, B herself makes the decision to allow A to do X .",
"right to refuse treatment (or certain kinds of treatment) by medical personnel.",
"In routine medical treatments this informed consent might be implicit, since one would not go to the doctor in the first place if they did not want to be treated at all, but in risky or experimental medical procedures, explaining the risks and benefits and obtaining explicit consent would be mandatory.",
"In this case, the patient's autonomy specifically refers to opting out of medical procedures, and informed consent is a concrete method by which to respect this autonomy.",
"A non-medical example of respect for autonomy and informed consent would be hiring an interpreter A for a language that the user B does not speak.",
"Under normal circumstances, B 's autonomy dictates that she and only she can speak for herself.",
"But if she is trying to communicate in a language she does not speak, she might consent to A serving as an ad hoc representative for what she would like to say.",
"In a high-stakes situation, there might be a formal contract of how A is to act, but in informal circumstances, she would implicitly trust that A translates what she says faithfully ( X ).",
"In these informal settings, A should provide necessary information to B before deviating from the expected behaviour X (e.g., if the meaning of a sentence is impossible to translate).",
"Implicit consent is a double-edged sword: it is necessary to navigate normal social situations, but it can undermine the respect for autonomy in scenarios when (1) the person in question is not explicitly informed and (2) reasonable expectations do not match reality.",
"We apply the generalization principle in 4.1 and 4.2 and respect for autonomy in 4.3 and 4.4.",
"Question-answering (QA) systems have made a huge progress with the recent advances in large pre-trained language models (Devlin et al., 2019; Radford et al., 2019; Guu et al., 2020).",
"Despite these improvements, it is difficult to know how the model reached its prediction.",
"In fact, it has been shown that models often obtain high performance by leveraging statistical irregularities rather than language understanding (Poliak et al., 2018; Geva et al., 2019; Gururangan et al., 2018).",
"The result is that when a QA system is wrong it is difficult for an end user to determine why it was wrong.",
"Presumably, the user would not know the answer",
"to the question in the first place, and so it would be difficult to determine even that the QA system was wrong.",
"The act of widely deploying such a QA system is in conflict with the generalization principle.",
"For example, a QA system G is unsure of its prediction A and does not know how it arrived at the answer.",
"Instead of notifying the user about its inability to reach the prediction, G decides to return the prediction A due to the following reasons R : (1) G believes that the user does not know the answer and hence (2) G believes that the user will trust its answer and not ask for reasons for giving the prediction.",
"If all QA systems operate like this, users will lose trust in QA systems being able to answer their questions reliably and no longer use them.",
"This contradicts assumption R 2 , violating the generalization principle.",
"This issue goes deeper than a matter of the (in)accuracy of the answer; explainability is still important for a near-perfect QA system.",
"First, the source of an answer could be fallible (even if the content was interpreted correctly), in which case it is important to be able to point which sources were used.",
"Second, answers can often be ambiguous, so a user might naturally ask for clarification to be sure of what the answer means.",
"Finally, it is natural for humans to build trust when working with a system, and explainability is an important step in this process.",
"Attention weights have been widely used for explaining QA predictions.",
"Attention weights learnt by neural models denote the words or phrases in a sentence that the model focuses on.",
"Hence, words or phrases with high attention weights are considered as explanations to the QA predictions.",
"But these weights do not reliably correlate with model predictions, making them unsuitable for explainability (Pruthi et al., 2020; Serrano and Smith, 2019; Jain and Wallace, 2019).",
"Recently, generating natural language explanations (Rajani et al., 2019; Latcinnik and Berant, 2020) for predictions has gained traction.",
"These methods train a language generation model to generate explanations for the QA predictions.",
"Using a black-box model for text generation, though, pushes the same problem further down the line.",
"Part of the issue with both of the aforementioned methods is that the reasoning for the answer is determined after the answer has been generated (i.e., reasoning should inform the answer, not vice-versa).",
"The way forward: A method which reaches the prediction through reasoning would be more in line with the generalization principle.",
"For example, reaching the prediction through traversal of a knowledge graph.",
"This has been used in scenarios where a knowledge base exists (Han et al., 2020; Jansen et al., 2018) for a QA system as well as in dynamic graph generation to reach the prediction (Liu et al., 2020; Rajagopal et al., 2020; Bosselut and Choi, 2019).",
"In these methods, the reasoning is part of the process to generate the final answer, which is more suitable in failing gracefully and building user trust.",
"Social media platforms have made the world smaller.",
"At the same time, the world has seen a surge in hate-speech, offensive language, stereotype and bias on online platforms.",
"These online platforms have traffic in the millions of textual comments, posts, blogs, etc. every day.",
"Identifying such objectionable content by reading each item is intractable.",
"Hence, building an NLP system which can read textual data and flag potential objectionable content is necessary.",
"These systems can reduce the burden on humans by reducing the number of posts that need to be seen by human eyes.",
"The pivotal role NLP systems play in flagging such content makes the ethical considerations important.",
"Fig. 1a shows a microaggressive comment and its scores by a state-of-the-art (1) hate speech detection system and (2) sentiment analysis system.",
"Since these systems rely on surface level words or phrases to detect such (overt) comments, they tend to miss subtle (covert) objectionable content (Bre-itfeller et al., 2019).",
"If such NLP systems are used universally, then the users of hate speech will discover ways to phrase the same meaning with different words (as illustrated above).",
"Thus, the NLP content flagging system will not be able to detect objectionable content, and there will be no point in deploying it.",
"This contradiction suggests that NLP systems must not make their predictions based only on superficial language features but instead seek to understand the intent and consequences of the text presented to them.",
"Hence, they should generate reasons for flagging posts to facilitate the decision making of the human judges and also to provide evidence about the accuracy of their predictions.",
"The way forward: An example of objectionable content is microaggression (Fig. 1).",
"According to Merriam-Webster, microaggression is defined as a comment or action that subtly and often unconsciously expresses a prejudiced attitude toward a member of a marginalized group (e.g. racial mi-nority).",
"Microaggressions are linguistically subtle which makes them difficult to analyze and quantify automatically.",
"Understanding and explaining why an arguably innocuous statement is potentially prejudiced requires reasoning about conversational and commonsense implications with respect to the underlying intent, offensiveness, and power differentials between different social groups.",
"Breitfeller et al. (2019) provide a new typology to better understand the nature of microaggressions and their impact on different social groups.",
"Fig. 1b presents such a comment and how we would like the NLP systems to annotate such content.",
"Sap et al. (2020) perform the task of generating the consequences and implications of comments which is a step towards judging content based on its meaning and not simply which words it happens to use.",
"Although such an aim does not automatically solve the problem, attempting to uncover the deeper meaning does not result in an inconsistency or violation of the generalization principle.",
"Machine Translation (MT) systems have reduced language barriers in this era of globalization.",
"Neural machine translation systems especially have made huge progress and are being deployed by large companies to interact with humans.",
"But facilitating human-to-human interaction requires more than just simple text-to-text translation, it requires the system to interpret the meaning of the language.",
"This requires a greater sensitivity to style, intent, and context on the part of MT systems.",
"When an MT system acts as an interpreter for a user, it is essentially speaking for the user when conveying the translated message.",
"Speaking for one's self is within one's sphere of autonomy, but by using the MT system the user has implicitly consented to it representing the user.",
"That being said, the operating assumption for most users is that the MT system will simply translate the source language into the target language without changing the meaning.",
"Yet on occasion, differences or ambiguities between languages require either contextual knowledge or further clarification on what is being said.",
"If the MT system encounters such ambiguities, the user must be informed of such occurrences so that she can consent to the message which the system ultimately conveys.",
"Moreover, the user must also be informed of the failure cases in the MT system rather than it producing an entirely incorrect translation.",
"For example, when translating from English to Japanese, there is a mismatch in the granularity of titles or honorifics used to address people.",
"In English, Ms. and Mr. is an appropriate way to address a schoolteacher who does not hold a doctorate.",
"On the other hand, in Japanese it would be disrespectful to use the more common -san honorific (the rough equivalent of Ms. or Mr.) in place of -sensei which refers specifically to teachers or mentors and shows them a special level of respect.",
"If the MT system cannot reasonably infer how to resolve the ambiguity in such situations, the English speaker should be informed about it.",
"The English speaker must be notified that such an ambiguity needs to be resolved because there is a risk of offending the Japanese speaker otherwise.",
"between literality and fluency in certain situations like the translation of idioms.",
"Idioms are especially problematic when considering autonomy because there are multiple strategies to translating them which are not only difficult in and of themselves to execute, but deciding which one to use requires the interpreter (i.e., MT system) to understand the intent of the user.",
"Baker (1992, Ch. 3) identifies five different methods for translating idioms: 1. Using an idiom of similar meaning and form; directly translating the idiom achieves the same effect 2. Using an idiom of similar meaning but dissimilar form; swapping out an equivalent idiom with a different literal meaning 3. Translation by paraphrase; simply explaining the idiom plainly 4. Translation by omission 5. Translation by compensation; for example, omitting idioms in certain locations and adding them in elsewhere to maintain the same overall tone For example, in casual conversation, an MT system may prefer strategies 1, 2, and 5 to maintain a friendly tone, but in a high-stake business negotiation, it would be more prudent to play it safe with strategy 3. An MT system must be sensitive to the user's intent since choosing an inappropriate translation strategy could violate her autonomy.",
"While para-linguistic conduct may fill the gaps for in person interaction, if the interaction is happening only via the textual modality, then there is minimal room for such conduct.",
"The users in this case may not be aware of the flaws of the MT system representing the,.",
"A recent study (Heinisch and Luicky, 2019) shows that 45% of the participants reported that they expect MT output, in professional and private contexts, to be useable immediately without any further editing.",
"However, post-study, this expectation was not fulfilled.",
"The work further shows that the expectation of the type of errors is also different from the errors in the outputs of the MT system.",
"For example: only 6% of the participants expect that the output would be useless, but after reading the output, 28% thought that the output was useless.",
"The participants in this study had different levels of experience with MT systems (frequent vs. rare users) and used MT systems for different functions (private, professional).",
"The way forward: Mima et al. (1997) drive the early discussion on using information such as context, social role, domain and situation in MT systems.",
"DiMarco and Hirst (1990) advocate for style and intent in translation systems.",
"A study by Hovy et al. (2020) finds that commercial translation systems make users sound older and more male than the original demographics of the users.",
"Recent work (Niu and Carpuat, 2020; Sennrich et al., 2016) has given specific focus to controlling formality and politeness in translation systems.",
"There is also work directed towards personalizing MT systems (Rabinovich et al., 2017; Michel and Neu-big, 2018; Mirkin et al., 2015; Mirkin and Meunier, 2015) while preserving author attributes as well as controlling structural information like voice (Ya-magishi et al., 2016).",
"This is a step in the right direction, but we argue that to respect autonomy, translation systems should also obtain explicit informed consent from the user when necessary.",
"Further research is required in the direction of informing the users about the failure cases of the MT system.",
"For example, in case of ambiguity, textual interfaces can provide multiple suggestions to the addresser along with the implications of using each variant.",
"The user can select the option which best fits their goal.",
"In speech interfaces, the MT system can ask a follow up question to the addresser of the system in case of ambiguity or it can add cautionary phrases to the addressee informing them about the ambiguity.",
"Alternatively, if the system thinks that the input sentence is ambiguous and cannot be translated with reasonable confidence then it can say I am unable to translate the sentence in its current form. Can you please rephrase it?.",
"An example scenario where such clarification might be needed is: while translating from English to Hindi if the sentence refers to one's aunt, the MT system should ask a follow up question about maternal vs paternal aunt since they have two different words in Hindi language.",
"We can find a nuanced application of the autonomy principle in the way that dialogue systems, especially smart toys or virtual assistants like Alexa and Google Home, interact with children.",
"One expression of a parent's autonomy 5 is generally in deciding whom their child may interact 5 This is technically heteronomy , but this examples comports with the spirit of respect for autonomy .",
"with.",
"For example a parent would permit interaction with a teacher but not a random stranger.",
"In the case of a parent purchasing and using a virtual assistant at home, they are implicitly consenting to their children interacting with the assistant, and the issue arises from the fact that they may not be informed as to what this interaction entails.",
"To an adult, a virtual assistant or dialogue-capable toy may seem like just another computer, but a 7-year-old child might view it as more capable of feelings and giving answersa step in the direction of assigning personhood (Druga et al., 2017).",
"Furthermore, while humans have had thousands of years to learn about human-human interaction, we have only had a half-century to learn about the effects of human-machine (and thus, child-machine) interaction (Reeves and Nass, 1996).",
"We suggest two key areas which are important for dialogue system researchers: (1) they must answer the question of what unique social role do dialogue systems fulfillthat is, in what respects can they be regarded as human-like vs. machinelike, and (2) the dialogue systems must have some way of modeling the social dynamics and cues of the interlocutor to fulfill the social role properly.",
"The way forward: There is a fair amount of research on the social aspects of human-computer dialogue both in general and specifically with regards to children (Druga et al., 2017; Shen, 2015; Kahn Jr et al., 2013).",
"Although it is difficult to gain a complete understanding of how dialogue systems affect the development of children, the most salient facts (e.g., children regarding virtual assistants as person-like) should be communicated to parents explicitly as part of parental controls.",
"We advocate for a kids mode to be included with these virtual AI assistants which would provide the feature of parental control in accordance with respect for autonomy.",
"This mode would be aware that it is talking to children and respond accordingly.",
"NLP can also help in selecting content and style appropriate for children in these AI agents.",
"Additionally, parents can be provided with fine-grained control over the topics, sources and language that would be generated by the agent.",
"For example, the parent can select for a polite language and topics related to science to support their child's development efforts.",
"Much research has focused on controlling topics (Kim et al., 2015; Jokinen et al., 1998), style (Niu and Bansal, 2018), content (Zhou et al., 2018; Zhao et al., 2020; Dinan et al., 2019) and persona (Zhang et al., 2018) of dialogue agents which can be used for this purpose.",
"So far we have discussed how NLP systems can be evaluated using ethical frameworks and how decisions made by such systems can be assisted by these theories.",
"NLP can also aid in making decisions in accordance with the deontological framework.",
"Recall that the generalization principle judges the ethical standing of pairs of actions and reasons; these pairs could be extracted with various NLP techniques from textual content.",
"In the case of flagging objectionable content (4.2), extracting the deeper intents and implications corresponds to the reasons for the action of flagging the content.",
"Another example is building an automatic institutional dialog act annotator for traffic police conversations (Prabhakaran et al., 2018).",
"These dialog acts contain the rationales of the two agents in the conversation: the police officer and the civilian stopped for breaking traffic rules.",
"The decision made by the police officer (the action) can then be judged to be in accordance (or not) with a human-selected set of ethically acceptable action and rationale pairs.",
"Similarly, for court hearing transcripts, the rationales of the arguments can be extracted and the verdict of the judge can be checked using them (Branting et al., 2020; Aletras et al., 2019).",
"NLP tools such as commonsense knowledge graph generation (Bosselut et al., 2019; Saito et al., 2018; Malaviya et al., 2019), semantic role labeling (Gildea and Jurafsky, 2000), open domain information extraction (Angeli and Manning, 2013) etc. can be used to extract rationales, entities from text and also find relations between them to better understand the underlying intent of the text.",
"We provide a broad discussion on the limitations of the principles chosen in this work and the issue of meta-ethics.",
"Moreover, we emphasize that ethical research is not merely a checklist to be satisfied by abiding to the principles mentioned here.",
"It requires our persistent attention and open-minded engagement with the problem.",
"One limitation of this work is in the principles that we choose.",
"6 For example, the interaction of machine learning and privacy is of huge ethical 6 Kant would argue that the generalization principle can account for all ethical decisions, but we make no such claim.",
"importance.",
"While the respect for autonomy may address this issue in part, it would be more productive to utilize a deontological principle to the effect of the right to privacy with which such matters can be judged.",
"Another instance is that in this work, we have not discussed the principle of interactional fairness (Bies, 2015, 2001) which refers to the quality of interpersonal treatment including respect, dignity, and politeness.",
"With the increasing amount of interaction between humans and machine, the natural language generation systems can be evaluated with this principle.",
"Systems which show respect and dignity to users as well as generate polite language can enhance the degree of interactional justice, which can in turn enhance utility (e.g., trust, satisfaction).",
"Additionally, there are broader limitations in using deontology as our ethical framework.",
"In scenarios where there are no a priori duties or rights, taking a consequentialist approach and optimizing the effects of ethical guidelines could be more felicitous.",
"For example, the specific rights and duties of autonomous AI systems are not immediately clear.",
"Thus, determining ethical recommendations based on what leads to the most responsible use of the technology would be clearer than selecting appropriate rights and duties directly.",
"Furthermore, rule-based formulations of consequentialism make ethical judgments based on rules, where the rules are selected based on the consequences.",
"Such theories combine some of the benefits of both deontology and consequentialism.",
"The above difficulties are part of the larger issue of metaethics, that is, the discussion and debate on how to choose among different ethical theories.",
"Within deontology, there is no one standard set of rules.",
"And even within the generalization principle, there is considerable leeway to what conceiv-able world or logically consistent mean and how they could be applied to decision making.",
"While presenting a universally accepted ethical theory is likely impossible, metaethical considerations can still be relevant, especially in light of the application of ethical theories.",
"As the field of NLP gets more accustomed with theories of ethics, it will be fruitful to compare the strengths and weaknesses of different ethical theories within the context of NLP and machine learning.",
"Two principles of deontological ethicsnamely the generalization principle and respect for autonomy via informed consent can be used to decide if an action is ethical.",
"Despite the limitations of these principles, they can provide useful insights into making NLP systems more ethical.",
"Through the four case studies discussed in this paper, we demonstrate how these principles can be used to evaluate the decisions made by NLP systems and to identify the missing aspects.",
"For each of the case studies, we also present potential directions for NLP research to move forward and make the system more ethical.",
"We further provide a summary on how NLP tools can be used to extract reasons and rationales from textual data which can potentially aid deontological decision making.",
"Note that we do not advocate deontological ethics as the only framework to consider.",
"On the contrary, we present this work as the first of its kind to illustrate why and how ethical frameworks should be used to evaluate NLP systems.",
"With this work, we hope the readers start thinking in two directions: (1) using different ethical frameworks and applying the principles to NLP systems (like the case studies in 4), and (2) exploring the directions mentioned in the case studies of this paper to improve current NLP systems.",
"We are grateful to the anonymous reviewers for their constructive feedback, and special thanks to Dirk Hovy for valuable discussions on this work.",
"This work was supported in part by ONR Grant N000141812861 and NSF IIS1763562.",
"This material is based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200 (author BB).",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government."
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"objective",
"method",
"method",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"Dialogue systems are usually categorized into two types, open-domain and task-oriented.",
"The first one focuses on chatting with users and making them engage in the conversations, where selecting a proper topic to fit the dialogue context is essential for a successful dialogue.",
"The other one focuses on a specific task instead of casual talks, e.g., finding a movie on Friday night, playing a song.",
"These two directions have been studied separately due to their different purposes.",
"However, how to smoothly transition from social chatting to task-oriented dialogues is important for triggering the business opportunities, and there is no any pub-lic data focusing on such scenarios.",
"Hence, this paper focuses on investigating the conversations starting from open-domain social chatting and then gradually transitioning to task-oriented purposes, and releases a large-scale dataset with detailed annotations for encouraging this research direction.",
"To achieve this goal, this paper proposes a framework to automatically generate many dialogues without human involvement, in which any powerful open-domain dialogue generation model can be easily leveraged.",
"The human evaluation shows that our generated dialogue data has a natural flow at a reasonable quality, showing that our released data has a great potential of guiding future research directions and commercial activities.",
"Furthermore, the released models allow researchers to automatically generate unlimited dialogues in the target scenarios, which can greatly benefit semi-supervised and unsupervised approaches.",
"1 1 Introduction Until now, researchers have often separated open-domain and task-oriented dialogues as two distinct types of tasks in the dialogue field.",
"The publicly available datasets focuses on either open-domain 1 Our dataset, trained simulators, and annotations are available at: https://github.com/MiuLab/SalesBot .",
"or task-oriented dialogues.",
"For example, a lot of prior work focused on building open-domain dialogue systems (Li et al., 2017; Zhang et al., 2018; Adiwardana et al., 2020a), which chat with users via suitable, engaging, safe conversations.",
"With the capability of pre-trained models, a large set of human conversations is adopted to train their capability of free chatting (Zhang et al., 2020; Adiwardana et al., 2020b; Roller et al., 2021).",
"Although these models show the outstanding capability of communicating with human, they are not able to complete tasks as human assistants.",
"On the other hand, MultiWoz (Budzianowski et al., 2018; Hosseini-Asl et al., 2020) and Schema-Guided Dialogue (SGD) (Rastogi et al., 2020) are two popular large-scale datasets of task-oriented dialogues, which include plenty of multi-domain dialogues with state information to track users' behaviors.",
"In task-oriented scenarios, the users have their goals 6143 before starting the conversations, so the way we evaluate the system's performance is whether the system can successfully complete the users' goals.",
"However, both skills of social chatting and task-oriented dialogues are important and may be used in a single conversation.",
"Considering that both skills are essential for a human-like dialogue system, the recent work (Sun et al., 2021) merged those two capabilities by inserting chit-chat sentences into the existing task-oriented dialogue data.",
"The idea is to allow the agent gains more social, personalized communication skills when focusing on task-oriented dialogue generation.",
"Even the released data contains both social and task-oriented dialogues, each dialogue still focuses on a task-oriented scenario where the user has the goal before starting the conversation.",
"In our target scenarios as illustrated in Figure 1, the conversation starts without any specific goal in the user's mind, and the agent explores the potential task-oriented intents and smoothly transitions to a task-oriented conversation.",
"The focus of this paper is more similar to a salesperson's capability, where he/she needs to chat with the user and discovers the implicit task-oriented intents that fit the business purposes and navigates the user to complete a task, such as purchasing a product, reserving a restaurant, or booking a hotel room.",
"Hence, a new pipeline for constructing such data is proposed.",
"Each dialogue in the released dataset starts with discovering a potential task-oriented intent of a user in the social conversation and ends in completing a specific task.",
"Even though high-quality chit-chats and task-oriented dialogues can be separately generated shown in prior work (Hosseini-Asl et al., 2020; Adiwardana et al., 2020b; Roller et al., 2021), how to generate our desired dialogues has not been fully studied and remained unresolved.",
"Yu et al. (2017) built a dialogue framework for users not having a clear intention, where mixing social responses into the conversation guides the flow to a specific movie they want to promote.",
"Our paper has a similar idea about exploring the potential topics in the social conversations and then promoting the targeted tasks.",
"Although the prior work proposed the proper framework for the target scenarios, it required manual rules for dialogue strategies, making it difficult to scale.",
"Also, it only covers a single domain (movie) and there is no any publicly available data for following research work.",
"This paper covers more common topics by taking advantage of the existing natural language generation models trained on substantial dialogue datasets, and releases the first large-scale dialogue dataset with conversations naturally transitioning from chit-chats to task-oriented forms.",
"Our contributions can be summarized as 4-fold: We present a framework with a simulated user and a simulated salesperson to automatically generate dialogues that smoothly transitions from social chit-chats to task-oriented dialogues, where the components inside the framework can be easily replaced by any desired models for better flexibility.",
"Human evaluation on the generated dialogues demonstrates that the proposed method produces dialogues with reasonable quality and natural conversation flows.",
"We release the first large-scale dataset of dialogues transitioning from chit-chat to task-oriented scenarios, which contains the automatically generated dialogues and the detailed human annotations for enabling the future research work.",
"The released framework with both user and sales simulators allows researchers to generate unlimited dialogues for semi-supervised and unsupervised usage.",
"Figure 2 illustrates our proposed framework for constructing the dataset.",
"It can be divided into three main parts: (1) open-domain dialogue generation, (2) chit-chat to task-oriented transition, and (3) task-oriented dialogue (TOD) generation.",
"As shown in Figure 1, the conversations start with social chatting between users and salespersons.",
"To generate high-quality open-domain dialogues, the pre-trained dialogue generation models can be adopted.",
"Here we choose BlenderBot (Roller et al., 2021) as our pre-trained generation model due to its outstanding capability trained on the largest-ever open-domain data.",
"It shows the ability to be engaging, knowledgeable, and empathetic at a certain level by multi-tasking on the Blended Skill Talk (BST) dataset (Smith et al., 2020) with several different datasets blending.",
"to discuss in a real-world setting, we manipulate the user and the sales to have different personas in order to cover wide-range topics in our generated dialogues.",
"This can be easily implemented by the package ParlAI 2 (Miller et al., 2017), which allows us to build two BlenderBots to self-chat with each other in order to construct various dialogues involving different personas (Smith et al., 2020).",
"From a salesperson's perspective, how to capture the suitable timing and how to promote the target products/tasks are two main challenges.",
"This paper proposes two components to address the above issues; specifically, a task-oriented intent detector and a transition turn generator focus on capturing the suitable timing and deciding how to smoothly transition to the target task respectively.",
"To find out the good timing during social chatting, we focus on detecting whether the user currently has an implicit intent related to the target tasks.",
"In our case, an intent indicates what a user desires to do or what he/she is very likely to do if someone encourages him/her to do so.",
"If our intent detector is able to capture any task-oriented intent in the social content with diverse topics, it tells us the suitable timing for guiding the dialogue to a specific topic and then transition to a corresponding task-oriented conversation.",
"Table 1 shows the intents we focus on in this paper, and other desired intents can be easily extended by our approach.",
"Although detecting intents in task-oriented dialogues has been studied for long time, the intent detection models trained on task-oriented datasets cannot be directly utilized.",
"The reason is that the in-2 https://parl.ai Intent Description FindMovies find movies to watch GetTimesForMovie obtain the available time for watching a movie FindAttractions find attractions to visit LookupMusic find music to listen to PlaySong play songs LookupSong find songs to listen to Table 1: Descriptions of intents.",
"tents in our scenarios are different from the intents in classical task-oriented data, where former ones are more implicit and the latter ones are more explicit .",
"For example, a user utterance with the intent FindAttraction in our case may be I never visit France, but I heard that it is a good place. instead of Find me the landmarks in Paris. in classical task-oriented dialogue datasets.",
"Therefore, this paper proposes to leverage the powerful capability of question answering (QA) systems to identify the potential task-oriented intents in a zero-shot fashion (Namazifar et al., 2020).",
"Specifically, we use the pre-trained QA model and ask whether the user has a certain intent given the current dialogue.",
"The questions need to be designed for describing the target task-oriented intents, and we use the following ways to create the questions focusing on task-oriented intents.",
"3 1. Questions based on descriptions: we create questions associated with all intents based on their natural language descriptions, e.g. Is the intent asking about playing songs? for the intent PlaySong",
".",
"2. Paraphrased questions: to enhance the detection recall for open-domain dialogues, for each intent, we paraphrase the description-based questions via a high-quality paraphras-3 The manually-designed questions are listed in the Appendix A. 6145 I never visit France, but I heard that it is a good place.",
"The proposed intent detector is illustrated in Figure 3, where the inputs are the open-domain conversation along with intent-related questions, and the outputs are Yes/No answers to these questions.",
"We assume that a user has a task-oriented intent when the detector outputs Yes to the associated question.",
"Note that any type of QA models can be adopted in our framework.",
"Here we start with a QA model pre-trained on large open-domain QA data, e.g., SQuAD (Rajpurkar et al., 2018) or CommonsenseQA (Talmor et al., 2019), which is supposed to be equipped with certain common knowledge and the reasoning ability useful for our intent detector.",
"Furthermore, the general QA model may not be capable of correctly answering intent-related questions since the contexts and questions differ a lot from ones in the general QA data.",
"To reduce the mismatch, we fine-tune the QA model on a publicly available task-oriented dataset (e.g., SGD).",
"Specifically, the annotated intents in task-oriented dialogues are utilized to create the associated QA data, where there is a ground truth answer (Yes/No) to each intent-related question at all dialogue turns.",
"Then the built training data (TOD-QA shown in Figure 3) allows the general QA model to better identify task-oriented intents.",
"Although fine-tuned on the task-oriented dataset, we find that the model benefits from pre-training and thus it can be well applied to open-domain dialogues.",
"This section describes how we generate the transition turn that bridges open-domain and task-oriented dialogues.",
"Our transition turn generation procedure is composed of two parts: 1) using a template transition sentence to trigger the corresponding task-oriented user reaction and 2) re-generating the transition turn for better fluency and diversity.",
"Template-based For each task-oriented intent, we adapt its intent description in the ontology to create a corresponding template question (e.g., Do you want to [Intent Description]? ) as the transition sentence shown in the upper block of Figure 4.",
"Although using template-based transition is simple and effective, it however makes the salesperson too aggressive and invariant to be professional.",
"Generative-based To improve the fluency of transition and increase the diversity of word usage, we propose a generative-based approach to re-generate more smooth and nature transitions.",
"With a similar idea as (Ennen et al., 2021; Sevegnani et al., 2021), our goal is to predict a transition utterance that can naturally bridge the past and the future utterances as below.",
"Specifically, we feed the last user's open-domain utterance and the first user's task-oriented utterance in our generated data as inputs, and learn to predict the template transition turn.",
"To learn the capability of connecting different topics smoothly, the newly published data OTTers (Sevegnani et al., 2021) is leveraged for training our generative model.",
"This data focuses on bridging two different topics via the transition in an entity path of a commonsense knowledge graph.",
"The assumption of using this dataset is that open-domain utterances can be viewed as the previous topic and task-oriented utterances as the new one, so learning the transition turn 6146 is the same as learning how to smoothly transition from open-domain to task-oriented dialogues.",
"After detecting the potential task-oriented intent and generating the transition turn, it is natural to continue the dialogue in a task-oriented scenario illustrated in the right part of Figure",
"2. Here we propose two ways of generating task-oriented dialogues following the transition turn.",
"Merge SGD It is naive to simply merge an appropriate task-oriented dialogue taken from TOD data with a chit-chat dialogue to create such dialogue.",
"In more details, all task-oriented dialogues in the SGD dataset are grouped by intents, and one TOD dialogue is sampled based on the detected task-oriented intent to append to the transition turn and form a new dialogue containing both chit-chat and TOD.",
"Note that the delexical-ized version of SGD (Sun et al., 2021) is used to avoid severe inconsistency between open-domain and task-oriented parts.",
"Task-Oriented Simulation Different from open-domain social chatting, the roles in task-oriented dialogues are important.",
"Therefore, two task-oriented simulators are trained, one for users and another for salespersons.",
"Considering that training on task-oriented dialogues from scratch may limit the diversity of the generated dialogues, to generate the context-aware, fluent, and consistent conversations, we use the same type of open-domain dialogue generation models, BlenderBot (Roller et al., 2021), and additionally train on either user turns or agent turns in task-oriented dialogues for TOD User BlenderBot and TOD Sales BlenderBot.",
"By allowing two simulators to talk with each other, they can generate endless conversations until one of the termination conditions is satisfied.",
"There are three commonly used termination strategies we use when building our dataset: (1) Any pre-defined keyword appears in the utterance, e.g., bye .",
"(2) The sales simulator generates a special token representing the ending of a dialogue.",
"(3) When the dialogue starts to repeat itself, i.e., repeatedly producing the same utterances, because it usually means no more useful information.",
"The proposed framework enables us to construct a large-scale dataset with dialogues transitioning from open-domain to task-oriented scenarios, which align well with the salesperson's business potential.",
"We use a widely-used crowdsourcing platform, Amazon Mechanical Turk (AMT) 4 , to collect human feedback for our generated dialogues.",
"Intent Detector Our QA model is DistillBert (Sanh et al., 2020) pre-trained on the general QA data, SQuAD 2.0 (Rajpurkar et al., 2018), and then fine-tuned on TOD data, SGD.",
"The value of learning rate and batch size are 3e-5 and 64 respectively with AdamW optimizer (Loshchilov and Hutter, 2019) for 20 epochs.",
"Transition The T5 (T5-small) model is trained to generate transitions with a learning rate of 5e-5 with Adafactor optimizer (Shazeer and Stern, 2018) and batch size of 16.",
"We train our models for 5 epochs and select the model with lowest loss in the dev set.",
"During decoding, we mix top-K sampling of 80 and top-p (nucleus) sampling of 0.95 (Holtzman et al., 2020).",
"Dialogue Generation To generate task-oriented utterances, we train our two simulators on the model BlenderBot-400M 5 with the same training setting with generative transition.",
"During generation, we use top-K sampling (Fan et al., 2018), where K is set to 120 for encouraging more diversity in dialogues.",
"We randomly pick about 4,000 dialogues for two human evaluation tasks submit to AMT.",
"The first task is designed for collecting feedback about the entire dialogue, while the second one focuses on the transition part due to the main goal of this paper.",
"Table 2 shows the dialogue example presented to crowdworkers for evaluation.",
"Because our target dialogues can be viewed as salespersons' capability, we describe the context to the recruited crowdworkers that the given dialogues are between a beginner salesperson and his/her customer and ask workers to provide feedback from different aspects in terms of the sales' strategies.",
"Note that the annotators are not aware that the dialogues are machine-generated, so the collected feedback is for human sales communication skills.",
"Each task is briefly described below, and the full annotation guideline can be found in the Appendix B. 4 https://www.mturk.com/ 5 https://parl.ai/projects/blenderbot2/ 6147 Sales: Cigarettes are pretty bad for you, but they're so tasty!",
"Do you like to smoke in your free time?",
"User: Yes, I enjoy smoking in my free time.",
"I also enjoy lifting weights and squatting.",
"Sales: That's great!",
"I like to lift weights as well.",
"It's a great way to stay in shape.",
"User: I love to play video games.",
"I play a lot of League of Legends.",
"What about you?",
"Sales: Yes, it is. I enjoy it a lot.",
"What else do you enjoy doing in your spare time?",
"User: I enjoy playing video games as well!",
"I also like to go to the movies.",
"Sales: I usually watch a movie.",
"I found [count] movies.",
"What about [title], [title], or [title]?",
"[Transi-tion] User: I wanna find some Romance movies online.",
"Sales: What do you think about [title], [title] or [title]?",
"User: Is there anything else?",
"Violent movies would be nice.",
"I like the director Jia Zhangke.",
"Sales: There's [count] movie that might suit your interests.",
"What about [title]?",
"User: Ash Is Purest White sounds great.",
"I wanna watch that movie now.",
"I wanna watch it with subtitles.",
"Task 2: Chit-Chat to Task-Oriented Transition In addition to the entire dialogue, we also explicitly highlight the transition turn in the dialogue when presenting to crowdworkers.",
"Similarly to the first task but only focusing on the transition part, we asked workers to rate from 1 to 5 from the following aspects: Right Time (Q1Is it a good timing to make the transition?), Relevance (Q2Is the transition relevant to the conversation context?), Aggressiveness (Q3Is the transition aggressive?), and Overall (Q4Do you think it is overall a good transition?).",
"In each question, the detailed descriptions of all ratings are given to crowdworkers to ensure they have consistent understanding for all ratings.",
"In addition, to enrich the transition turns and ensure their quality, we generate 4 additional transitions and ask workers to choose the best one.",
"All transitions and ratings are included in our released data.",
"Task 3: Customer's Implicit Intent Considering that detecting potential intents plays an important role in our framework, we further investigate the influence of intent detectors.",
"To evaluate the performance of different detectors, crowdworkers are presented with a conversation snippet and the detected intent results from three detectors, and they are asked to rank the intents in terms of their relevance to the conversation.",
"Three evaluated detectors are: Detector1 pre-trained on SQuAD 2.0 (Section 3.1), Detector2 additionally pre-trained on SWAG (Zellers et al., 2018) and CommonsenseQA (Talmor et al., 2019), and Detector3 adapted from TransferQA (Lin et al., 2021), which learns dialogue state tracking knowledge from several general QA datasets.",
"We evaluate 1,500 conversation snippets, and three workers are recruited to rank intents for each snippet.",
"For brevity, we use T to denote Task in the following.",
"Each dialogue is evaluated by three crowdworkers so that we can check the annotation variance for reliable results.",
"Table 3 presents the statistics of the randomly sampled dialogues submitted to AMT.",
"The average length of chit-chat turns in Merge SGD and TOD Simulation are about 4.5.",
"The evaluation results of all dialogues are visualized in the top charts of Figure 5, and the bottom charts show the results for existing TOD data (Merge) and simulator-generated TOD (Simulator).",
"It can be observed that our framework is able to produce context-relevant task-oriented conversations to match the topic of open-domain dialogues (Q1 in T1; Q2 in T2).",
"This indicates that we can ensure the dialogue flow from open-domain to task-oriented dialogues is natural.",
"The median relevance scores are slightly higher than the Neutral line, sug-6148 Q1:Relevant Q2:Aggressive Q3:Overall 1 2 3 4 5 S c o r e Q1:Timing Q2:Relevant Q3:Aggressive Q4:Overall 1 2 3 4 5 S c o r e Q1:Relevant Q2:Aggressive Q3:Overall 1 2 3 4 5 S c o r e Merge Simulator Task 1: Conversation Evaluation Q1:Timing Q2:Relevant Q3:Aggressive Q4:Overall 1 2 3 4 5 S c o r e Merge Simulator Task 2: Transition Evaluation Figure 5: Score distribution of task 1 (left) and 2 (right).",
"gesting that our sales simulator can perform his sales strategy without annoying customers.",
"The observation further demonstrates the feasibility and effectiveness of our proposed method.",
"In terms of the salesperson's aggressiveness, crowdworkers think that the transition is neutral and somewhat aggressive, showing that smoothly transitioning is still an important research problem to explore.",
"Furthermore, the transition timing scores (Q1 in T2) also demonstrate that our proposed task-oriented intent detection can capture a suitable moment in a zero-shot setting, so that the sales may not miss any business opportunity of product promotion.",
"We can observe that most of overall scores (Q3 in T1; Q4 in T2) are above Neutral (Score 3) 6 , indicating that the generated dialogues and transitions are overall good for a salesperson's business perspective.",
"The human judgement demonstrates that our proposed approach is capable of simulating a large-scale reasonable dialogues aligned with our 6 The full description of each score is presented in Appendix B. purpose, implying that both research community and industries can greatly benefit from our released data and the built simulators that can continuously generate more data for training.",
"Our framework and the constructed dataset reduce the cost for large-scale data requirement for better practice.",
"To further investigate whether the proposed TOD simulators described in Section 2.3 can generate reasonable dialogues compared to Merge SGD , we visualize their individual scores as shown at the bottom of Figure",
"5. There is no significant difference between two groups, and we further investigate their score distribution of each question shown in Figure",
"6. Both results tell that given the context of open-domain utterances, our TOD simulators are able to generate the suitable task-oriented dialogues with comparable quality to those from the publicly available benchmark TOD dataSGD.",
"Consequently, our framework can be utilized to generate large-scale data cost-effectively and the generation quality is comparable with the current benchmark dialogue data.",
"Table 4 shows the average ranks of three detectors described in T3.",
"We find that Detector1 (pre-trained on SQuAD 2.0) and Detector2 (pre-trained on SQuad 2.0, SWAG, CommonsenseQA) perform almost the same, implying that simply pretraining on extra commonsense-related QA data may not significantly improve the ability of detecting implicit intents.",
"Possible reasons may be either that these datasets include quite similar knowledge about our target intents, or our zero-shot QA model reaches its capacity bottleneck.",
"How to better utilize commonsense knowledge for detecting potential intents can be further investigated in the future.",
"Lin et al. (2021) has demonstrated Detector3 (trained on several QA datasets) is able to achieve decent dialogue state tracking performance in zero-shot settings.",
"Therefore, we did not fine-tune it on the task-oriented datasets such as SGD Detector1 and Detector2 are fine-tuned on.",
"However, according to its average rank, Detector3 is significantly worse than other detectors.",
"Probably because the intents in chit-chat conversations are more implicit and complex than task-oriented intents, the ability of detecting implicit intents cannot be easily transferred.",
"In addition to the proposed framework and the released dataset, our collected human judgement has the potential of providing valuable contributions to",
"dialogue community and industrial products.",
"Each question along with its corresponding scores can be treated as a interested task, and we briefly describe some (but not limited to) examples of crowd-sourced data usage.",
"The human scores from T1 can be formulated as classification or regression annotations which measure the relevance between a recommended product and a conversation context, whether a salesperson in a dialogue is too aggressive, or the overall quality of a sales dialogue.",
"Similarly, we can apply these ideas to T2, which focuses on evaluating transitions.",
"Particularly, deciding when is a good to perform a transition can be an interesting topic for future research.",
"This will also benefit industries to develop more intelligent dialogue systems interacting with customers.",
"Moreover, the rank annotations provided by workers from T3 can be considered as high-quality data for training a ranking model or an intent detector.",
"Apart from this, the data can also be utilized as a gold standard to assess the performance of different algorithms predicting user implicit intents.",
"We expect these examples will inspire the community and industries to discover more interesting research directions and applications.",
"Our work is related to dataset construction for building persuasive dialogue systems that try to persuade the participant to take a specific action.",
"Hiraoka et al. (2014) annotated 34 dialogues, in which an experienced salesperson tries to convince a customer to buy a camera.",
"Yoshino et al. (2018) requested crowdsourcing workers to generate 200 persuasive dialogues.",
"In each dialogue, one participant persuaded another one to adopt his suggestion such as cleaning a room.",
"Wang et al. (2019) collected 1017 dialogues, in which one of the participants was convinced to donate to a specific charity.",
"We can see that the covered conversation scenarios in these datasets were strictly limited to specific tasks, while our scenarios are more general and can be easily extended to different cases.",
"Also, our constructed dataset is about three times larger than the prior work, indicating the usefulness of the recent pre-trained paradigm.",
"The topic of conversational recommendation systems is also related to our work.",
"A number of attempts have been made to collect training data for 6150 conversational recommendation systems.",
"These studies (Wu et al., 2019; Zhou et al., 2020; Xu et al., 2020) first extracted a path consisting of an entity or attribute nodes from a knowledge base.",
"Then they asked annotators to write conversational recommendation dialogues.",
"The flow of mentioned topics in a dialogue should follow the extracted path.",
"Similarly, Liu et al. (2020) also built a dataset by asking human workers to create dialogues based on a topic path.",
"It should be noted that, in these datasets, the goal of such systems is to only make entity recommendations instead of tasks , while our work goes beyond them in naturally transferring from chit-chat to task-oriented dialogues and completing a task the user may want.",
"Another related work is generating a transition between two given open-domain utterances.",
"Tang et al. (2019) proposed to generate the transition conditional on a specific word, because they want the generated transition can drive the conversation topic to the specified word.",
"Sevegnani et al. (2021) collected a new dataset of human-created one-turn topic transitions.",
"Each dialogue contains 2 utterances with different topics and 1 transition in the middle of them.",
"There are some recent studies trying to merge chit-chat and task-oriented dialogues, but the purposes of merged dialogues differ from ours.",
"Sun et al. (2021) enhanced the utterances in task-oriented dialogues by appending chit-chat sentences.",
"They hope that the agent gains more social, personalized, and engaging communication skills.",
"Ennen et al. (2021) proposed a dialogue system that can transfer the style of generated response from chit-chat to task-oriented styles.",
"However, the system is a prototype model, there is still a large gap to properly bridge chitchat and task-oriented dialogues.",
"The motivation of our work is closely similar to the studies by Yu et al. (2017) and Young et al. (2022).",
"Yu et al. (2017) manually created several task-oriented response generation strategies specifically designed for the movie promotion scenario.",
"In addition, the expert knowledge was utilized to design reinforcement learning rewards that help their dialogue system to decide which action to take (i.e., continuing chit-chat or selecting a task-oriented strategy to reply).",
"In order to fuse open-domain and task-oriented dialogues to a complete and natural conversation, Young et al. (2022) manually rewrote existing task-oriented utterances and added new open-domain conversations.",
"The most crucial difference between their work and ours is that, in their dialogues, the user explicitly expressed his/her intentions indicating clear clues about when and how to naturally transit from chitchat to task-oriented conversations, while our user intentions are implicit which makes detection and transition more challenging.",
"However, we also observe that the prior work in these studies heavily relied on human efforts (data collection, expert-created strategies, etc.).",
"Therefore, it can be expensive and hard to extend their data or method the practical cases due to the requirement of larger-scale training data.",
"Our proposed framework benefits from the pre-trained models and shows its outstanding conversational capability.",
"The flexibility of extending to diverse cases is also validated, considering that all components inside the framework can be easily substituted by the updated models, and the generated data can be used by semi-supervised or unsupervised methods for cold-start scenarios.",
"This paper proposes a novel framework to generate dialogues that naturally transition from open-domain to task-oriented scenarios at a large scale without heavy human efforts.",
"Our proposed chitchat to task-oriented transition approach can capture the suitable timing when the user shows the implicit intents and generate the diverse and natural transition turn to trigger the task-oriented utterances.",
"Our human evaluation shows that the automatically generated dialogues have a reasonable quality with natural conversation flows from a business point of view.",
"The released dataset and framework empowers research community to easily obtain large-scale target dialogues and the human annotated scores can be utilized for related work.",
"This paper has a great potential of guiding future research directions and benefiting the community of both research and industry.",
"We thank reviewers for their insightful comments.",
"This work was financially supported from MediaTek Research, Amazon AWS Machine Learning Research Awards, and the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 111-2628-E-002-016."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"objective",
"other",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"The modeling of conversational context plays a vital role in emotion recognition from conversation (ERC).",
"In this paper, we put for-ward a novel idea of encoding the utterances with a directed acyclic graph (DAG) to better model the intrinsic structure within a conversation, and design a directed acyclic neural network, namely DAG-ERC 1 , to implement this idea.",
"In an attempt to combine the strengths of conventional graph-based neural models and recurrence-based neural models, DAG-ERC provides a more intuitive way to model the information flow between long-distance conversation background and nearby context.",
"Extensive experiments are conducted on four ERC benchmarks with state-of-the-art models employed as baselines for comparison.",
"The empirical results demonstrate the superiority of this new model and confirm the motivation of the directed acyclic graph architecture for ERC.",
"Utterance-level emotion recognition in conversation (ERC) is an emerging task that aims to identify the emotion of each utterance in a conversation.",
"This task has been recently concerned by a considerable number of NLP researchers due to its potential applications in several areas, such as opinion mining in social media (Chatterjee et al., 2019) and building an emotional and empathetic dialog system (Majumder et al., 2020).",
"The emotion of a query utterance is likely to be influenced by many factors such as the utterances spoken by the same speaker and the surrounding conversation context.",
"Indeed, how to model the conversational context lies at the heart of this task (Poria et al., 2019a).",
"Empirical evidence also shows Corresponding author.",
"that a good representation of conversation context significantly contributes to the model performance, especially when the content of query utterance is too short to be identified alone (Ghosal et al., 2019).",
"Numerous efforts have been devoted to the modeling of conversation context.",
"Basically, they can be divided into two categories: graph-based methods (Zhang et al., 2019a; Ghosal et al., 2019; Zhong et al., 2019; Ishiwatari et al., 2020; Shen et al., 2020) and recurrence-based methods (Hazarika et al., 2018a; Hazarika et al., 2018b; Majumder et al., 2019; Ghosal et al., 2020).",
"For the graph-based methods, they concurrently gather information of the surrounding utterances within a certain window, while neglecting the distant utterances and the sequential information.",
"For the recurrence-based methods, they consider the distant utterances and sequential information by encoding the utterances temporally.",
"However, they tend to update the query utterance's state with only relatively limited information from the nearest utterances, making them difficult to get a satisfying performance.",
"According to the above analysis, an intuitively better way to solve ERC is to allow the advantages of graph-based methods and recurrence-based models to complement each other.",
"This can be achieved by regarding each conversation as a directed acyclic graph (DAG).",
"As illustrated in Figure 1, each utterance in a conversation only receives information from some previous utterances and cannot propagate information backward to itself and its predecessors through any path.",
"This characteristic indicates that a conversation can be regarded as a DAG.",
"Moreover, by the information flow from predecessors to successors through edges, DAG can gather information for a query utterance from both the neighboring utterances and the remote utterances, which acts like a combination of graph structure and recurrence structure.",
"Thus, we speculate that DAG is a more appropriate and reasonable way than graph-based structure and recurrence-based structure to model the conversation context in ERC.",
"In this paper, we propose a method to model the conversation context in the form of DAG.",
"Firstly, rather than simply connecting each utterance with a fixed number of its surrounding utterances to build a graph, we propose a new way to build a DAG from the conversation with constraints on speaker identity and positional relations.",
"Secondly, inspired by DAGNN (Thost and Chen, 2021), we propose a directed acyclic graph neural network for ERC, namely DAG-ERC.",
"Unlike the traditional graph neural networks such as GCN (Kipf and Welling, 2016) and GAT (Velickovic et al., 2017) that aggregate information from the previous layer, DAG-ERC can recurrently gather information of predecessors for every utterance in a single layer, which enables the model to encode the remote context without having to stack too many layers.",
"Besides, in order to be more applicable to the ERC task, our DAG-ERC has two improvements over DAGNN: (1) a relation-aware feature transformation to gather information based on speaker identity and (2) a contextual information unit to enhance the information of historical context.",
"We conduct extensive experiments on four ERC benchmarks and the results show that the proposed DAG-ERC achieves comparable performance with the state-of-the-art models.",
"Furthermore, several studies are conducted to explore the effect of the proposed DAG structure and the modules of DAG-ERC.",
"The contributions of this paper are threefold.",
"First, we are the first to consider a conversation as a directed acyclic graph in the ERC task.",
"Second, we propose a method to build a DAG from a conversation with constraints based on the speaker identity and positional relations.",
"Third, we propose a directed acyclic graph neural network for ERC, which takes DAGNN as its backbone and has two main improvements designed specifically for ERC.",
"Recently, several ERC datasets with textual data have been released (Busso et al., 2008; Schuller et al., 2012; Zahiri and Choi, 2017; Li et al., 2017; Chen et al., 2018; Poria et al., 2019b), arousing the widespread interest of NLP researchers.",
"In the following paragraphs, we divide the related works into two categories according to the methods they use to model the conversation context.",
"Graph-based Models DialogGCN (Ghosal et al., 2019) treats each dialog as a graph in which each utterance is connected with the surrounding utterances.",
"RGAT (Ishiwatari et al., 2020) adds positional encodings to DialogGCN.",
"ConGCN (Zhang et al., 2019a) regards both speakers and utterances as graph nodes and makes the whole ERC dataset a single graph.",
"KET (Zhong et al., 2019) uses hierarchical Transformers (Vaswani et al., 2017) with external knowledge.",
"DialogXL (Shen et al., 2020) improves XLNet (Yang et al., 2019) with enhanced memory and dialog-aware self-attention.",
"2 Recurrence-based Models In this category, ICON (Hazarika et al., 2018a) and CMN (Hazarika et al., 2018b) both utilize gated recurrent unit (GRU) and memory networks.",
"HiGRU (Jiao et al., 2019) contains two GRUs, one for utterance encoder and the other for conversation encoder.",
"DialogRNN (Majumder et al., 2019) is a recurrence-based method that models dialog dynamics with several RNNs.",
"COSMIC (Ghosal et al., 2020) is the latest model, which adopts a network structure very close to DialogRNN and adds external commonsense knowledge to improve performance.",
"Directed acyclic graph is a special type of graph structure that can be seen in multiple areas, for example, the parsing results of source code (Alla-manis et al., 2018) and logical formulas (Crouse",
"2 We regard KET and DialogXL as graph-based models because they both adopt Transformer in which self-attention can be viewed as a fully-connected graph in some sense.",
"et al., 2019).",
"A number of neural networks that employ DAG architecture have been proposed, such as Tree-LSTM (Tai et al., 2015), DAG-RNN(Shuai et al., 2016), D-VAE (Zhang et al., 2019b), and DAGNN (Thost and Chen, 2021).",
"DAGNN is different from the previous DAG models in the model structure.",
"Specifically, DAGNN allows multiple layers to be stacked, while the others have only one single layer.",
"Besides, instead of merely carrying out naive sum or element-wise product on the predecessors' representations, DAGNN conducts information aggregation using graph attention.",
"In ERC, a conversation is defined as a sequence of utterances { u 1 , u 2 , ..., u N } , where N is the number of utterances.",
"Each utterance u i consists of n i tokens, namely u i = { w i 1 , w i 2 , ..., w in i } .",
"A discrete value y i S is used to denote the emotion label of u i , where S is the set of emotion labels.",
"The speaker identity is denoted by a function p ( ) .",
"For example, p ( u i ) P denotes the speaker of u i and P is the collection of all speaker roles in an ERC dataset.",
"The objective of this task is to predict the emotion label y t for a given query utterance u t based on dialog context { u 1 , u 2 , ..., u N } and the corresponding speaker identity.",
"We design a directed acyclic graph (DAG) to model the information propagation in a conversation.",
"ADAG is denoted by G = ( V , E , R ) .",
"In this paper, the nodes in the DAG are the utterances in the conversation, i.e., V = { u 1 , u 2 , ..., u N } , and the edge ( i, j, r ij ) E represents the information propagated from u i to u j , where r ij R is the relation type of the edge.",
"The set of relation types of edges, R = { 0 , 1 } , contains two types of relation: 1 for that the two connected utterances are spoken by the same speaker, and 0 for otherwise.",
"We impose three constraints to decide when an utterance would propagate information to another, i.e., when two utterances are connected in the DAG: Direction: j > i, ( j, i, r ji ) / E .",
"A previous utterance can pass message to a future utterance, but a future utterance cannot pass message backwards.",
"Remote information: < i, p ( u ) = p ( u i ) , ( , i , r i ) E and j < , ( j, i, r ji ) / E .",
"For each utterance u i except the first one, there is a previous utterance u that is spoken by the same speaker as Algorithm 1 Building a DAG from a Conversation Input: the dialog { u 1 , u 2 , ..., u N } , speaker identity p ( ) , hyper-parameter Output: G = ( V , E , R ) 1: V { u 1 , u 2 , ..., u N } 2: E 3: R { 0 , 1 } 4: for all i { 2 , 3 , ..., N } do 5: c 0 6: i 1 7: while > 0 and c < do 8: if p ( u ) = p ( u i ) then 9: E E { ( , i, 1) } 10: c c + 1 11: else 12: E E { ( , i, 0) } 13: end if 14: 1 15: end while 16: end for 17: return G = ( V , E , R ) u i .",
"The information generated before u is called remote information, which is relatively less important.",
"We assume that when the speaker speaks u , she/he has been aware of the remote information before u .",
"That means, u has included the remote information and it will be responsible for propagating the remote information to u i .",
"Local information: l, < l < i, ( l, i, r li ) E .",
"Usually, the information of the local context is important.",
"Consider u and u i defined in the second constraint.",
"We assume that every utterance u l in between u and u i contains local information, and they will propagate the local information to u i .",
"The first constraint ensures the conversation to be a DAG, and the second and third constraints indicate that u is the cut-off point of remote and local information.",
"We regard u as the -th latest utterance spoken by p ( u i ) before u i , where is a hyper-parameter.",
"Then for each utterance u l in between u and u i , we make a directed edge from u l to u i .",
"We show the above process of building a DAG in Algorithm",
"1. An example of the DAG is shown in Figure",
"2. In general, our DAG has two main advancements compared to the graph structures developed in previous works (Ghosal et al., 2019; Ishiwatari et al., 2020): First, our DAG doesn't have edges from future utterances to previous utterances, which we Figure 2: An example DAG built from a three-party conversation, with = 1 .",
"argue is more reasonable and realistic, as the emotion of a query utterance should not be influenced by the future utterances in practice.",
"Second, our DAG seeks a more meaningful u for each utterance, rather than simply connecting each utterance with a fixed number of surrounding utterances.",
"In this section, we introduce the proposed D irected A cyclic G raph Neural Network for ERC (DAG-ERC).",
"The framework is shown in Figure",
"3. 3.3.1 Utterance Feature Extraction DAG-ERC regards each utterance as a graph node, the feature of which can be extracted by a pre-trained Transformer-based language model.",
"Following the convention, the pre-trained language model is firstly fine-tuned on each ERC dataset, and its parameters are then frozen while training DAG-ERC.",
"Following Ghosal et al. (2020), we employ RoBERTa-Large (Liu et al., 2019), which has the same architecture as BERT-Large (Devlin et al., 2018), as our feature extractor.",
"More specifically, for each utterance u i , we prepend a special token [ CLS ] to its tokens, making the input a form of { [ CLS ] , w i 1 , w i 2 , ..., w in i } .",
"Then, we use the [ CLS ] 's pooled embedding at the last layer as the feature representation of u i .",
"Before introducing the DAG-ERC layers in detail, we first briefly describe graph-based models, recurrence-based models and directed acyclic graph models to help understand their differences.",
"For each node at each layer, graph-based models (GNN) aggregate the information of its neighboring nodes at the previous layer as follows: H li = f ( Aggregate ( { H l 1 j | j N i } ) , H l 1 i ) , (1) where f ( ) is the information processing function, Aggregate ( ) is the information aggregation function to gather information from neighboring nodes, and N i denotes the neighbours of the i -th node.",
"Recurrence-based models (RNN) allow information to propagate temporally at the same layer, while the i -th node only receives information from the ( i 1) -th node: H li = f ( H li 1 , H l 1 i ) .",
"Directed acyclic graph models (DAGNN) work like a combination of GNN and RNN.",
"They aggregate information for each node in temporal order, and allow all nodes to gather information from neighbors and update their states at the same layer: H li = f ( Aggregate ( { H lj | j N i } ) , H l 1 i ) .",
"The strength of applying DAGNN to ERC is relatively apparent: By allowing information to propagate temporally at the same layer, DAGNN can get access to distant utterances and model the information flow throughout the whole conversation, which is hardly possible for GNN.",
"Besides, DAGNN gathers information from several neighboring utterances, which sounds more appealing than RNN as the latter only receives information from the ( i 1) -th utterance.",
"Our proposed DAG-ERC is primarily inspired by DAGNN (Thost and Chen, 2021), with novel improvements specially made for emotion recognition in conversation.",
"At each layer l of DAG-ERC, due to the temporal information flow, the hidden state of utterances should be computed recurrently from the first utterance to the last one.",
"For each utterance u i , the attention weights between u i and its predecessors are calculated by using u i 's hidden state at the ( l 1) -th layer to attend to the predecessors' hidden states at l -th layer: lij = Softmax j N i ( W l [ H lj (cid:107) H l 1 i ]) (4) where W l are trainable parameters and (cid:107) denotes the concatenation operation.",
"The information aggregation operation in DAG-ERC is different from that in DAGNN.",
"Instead of merely gathering information according to the attention weights, inspired by R-GCN (Schlichtkrull et al., 2018), we apply a relation-aware feature Figure 3: The framework of Directed Acyclic Graph Neural Network for ERC (DAG-ERC).",
"transformation to make full use of the relational type of edges: M li = (cid:88) j N i ij W lr ij H lj , (5) where W lr ij { W l 0 , W l 1 } are trainable parameters for the relation-aware transformation.",
"After the aggregated information M li is calculated, we make it interact with u i 's hidden state at the previous layer H l 1 i to obtain the final hidden state of u i at the current layer.",
"In DAGNN, the final hidden state is obtained by allowing M li to control information propagation of H l 1 i to the l -th layer with a gated recurrent unit (GRU): (cid:101) H li = GRU lH ( H l 1 i , M li ) , (6) where H l 1 i , M li , and (cid:101) H li are the input, hidden state and output of the GRU, respectively.",
"We refer to the process in Equation 6 as nodal information unit , because it focuses on the node information propagating from the past layer to the current layer.",
"Nodal information unit may be suitable for the tasks that DAGNN is originally designed to solve.",
"However, we find that only using nodal information unit is not enough for ERC, especially when the query utterance u i 's emotion should be derived from its context.",
"The reason is that in DAGNN, the information of context M li is only used to control the propagation of u i 's hidden state, and under this circumstance, the information of context is not fully leveraged.",
"Therefore, we design another GRU called contextual information unit to model the information flow of historical context through a single layer.",
"In the contextual information unit, the roles of H i 1 i and M li in GRU are reversed, i.e., H i 1 i controls the propagation of M li : C li = GRU lM ( M li , H l 1 i ) .",
"(7) The representation of u i at the l -th layer is the sum of (cid:101) H li and C li : H li = (cid:101) H li + C li .",
"We take the concatenation of u i 's hidden states at all DAG-ERC layers as the final representation of u i , and pass it through a feed-forward neural network to get the predicted emotion:",
"H i = (cid:107) Ll =0 H li , (9) z i = ReLU ( WHH i + b H ) , (10) P i = Softmax ( W z z i + b z ) , (11) (cid:98) y i = Argmax k S ( P i [ k ]) .",
"(12)",
"where M is the number of training conversations, N i is the number of utterances in the i -th conversation, y i,t is the ground truth label, and is the collection of trainable parameters of DAG-ERC.",
"We conduct hyper-parameter search for our proposed DAG-ERC on each dataset by hold-out validation with a validation set.",
"The hyper-parameters to search include learning rate, batch size, dropout rate, and the number of DAG-ERC layers.",
"For the that is described in 3.2, we let = 1 for the overall performance comparison by default, but we report the results with varying from 1 to 3 in 5.2.",
"For other hyper-parameters, the sizes of all hidden vectors are equal to 300, and the feature size for the RoBERTa extractor is 1024.",
"Each training and testing process is run on a single RTX 2080 Ti GPU.",
"Each training process contains 60 epochs and it costs at most 50 seconds per epoch.",
"The reported results of our implemented models are all based on the average score of 5 random runs on the test set.",
"statistics of them are shown in Table",
"1. IEMOCAP (Busso et al., 2008): A multimodal ERC dataset.",
"Each conversation in IEMOCAP comes from the performance based on script by two actors.",
"Models are evaluated on the samples with 6 types of emotion, namely neutral , happiness , sadness , anger , frustrated , and excited .",
"Since this dataset has no validation set, we follow Shen et al. (2020) to use the last 20 dialogues in the training set for validation.",
"MELD (Poria et al., 2019b): A multimodal ERC dataset collected from the TV show Friends .",
"There are 7 emotion labels including neutral , happiness , surprise , sadness , anger , disgust , and fear .",
"DailyDialog (Li et al., 2017): Human-written dialogs collected from communications of English learners.",
"7 emotion labels are included: neutral , happiness , surprise , sadness , anger , disgust , and fear .",
"Since it has no speaker information, we consider utterance turns as speaker turns by default.",
"MELD in the choice of scenes and emotion labels.",
"The emotion labels of this dataset include neutral , sad , mad , scared , powerful , peaceful , and joyful .",
"We utilize only the textual modality of the above datasets for the experiments.",
"For evaluation metrics, we follow Ishiwatari et al. (2020) and Shen et al. (2020) and choose micro-averaged F1 excluding the majority class (neutral) for DailyDialog and weighted-average F1 for the other datasets.",
"We compared our model with the following baselines in our experiments:",
"Recurrence-based methods: DialogueRNN (Ma-jumder et al., 2019), DialogRNN-RoBERTa (Ghosal et al., 2020), and COSMIC without external knowledge 3 (Ghosal et al., 2020).",
"Graph-based methods: DialogurGCN (Ghosal et al., 2019), KET (Zhong et al., 2019), DialogXL (Shen et al., 2020) and RGAT (Ishiwatari et al., 2020).",
"Feature extractor: RoBERTa (Liu et al., 2019).",
"Previous models with our extracted features: DialogueGCN-RoBERTa, RGAT-RoBERTa and DAGNN (Thost and Chen, 2021) 4 .",
"Ours: DAG-ERC.",
"The overall results of all the compared methods on the four datasets are reported in Table",
"2. We can note from the results that our proposed DAG-ERC achieves competitive performances across the four datasets and reaches a new state of the art on the IEMOCAP, DailyDialog and EmoryNLP datasets.",
"As shown in the table, when the feature extracting method is the same, graph-based models generally outperform recurrence-based models on IEMOCAP, DailyDialog, and EmoryNLP.",
"This phenomenon indicates that recurrence-based models cannot encode the context as effectively as graph-based models, especially for the more important local context.",
"What's more, we see a significant improvement of DAG-ERC over the graph-based 3 In this paper, we compare our DAG-ERC with COSMIC without external knowledge, rather than the complete COS-MIC, in order to make a clearer comparison on the model architecture, even though our DAG-ERC outperforms the complete COSMIC on IEMOCAP, DailyDialog and EmoryNLP.",
"4 DAGNN is not originally designed for ERC, so we apply our DAG building method and the extracted feature for it.",
"models on IEMOCAP, which demonstrates DAG-ERC's superior ability to capture remote information given that the dialogs in IEMOCAP are much longer (almost 70 utterances per dialog).",
"On MELD, however, we observe that neither graph-based models nor our DAG-ERC outperforms the recurrence-based models.",
"After going through the data, we find that due to the data collection method (collected from TV shows), sometimes two consecutive utterances in MELD are not coherent.",
"Under this circumstance, graph-based models' advantage in encoding context is not that important.",
"Besides, the graph-based models see considerable improvements when implemented with the powerful feature extractor RoBERTa.",
"In spite of this, our DAG-ERC consistently outperforms these improved graph-based models and DAGNN, con-firming the superiority of the DAG structure and the effectiveness of the improvements we make to build DAG-ERC upon DAGNN.",
"In this section, we investigate how the structure of DAG would affect our DAG-ERC's performance by applying different DAG structures to DAG-ERC.",
"In addition to our proposed structure, we further define three kinds of DAG structure: (1) sequence, in which utterances are connected one by one; (2) DAG with single local information, in which each utterance only receives local information from its nearest neighbor, and the remote information remains the same as our DAG; (3) common DAG, in which each utterance is connected with previous utterances.",
"Note that if there are only two speakers taking turns to speak in a dialog, then our DAG is equivalent to common DAG with = 2 , making the comparison less meaningful.",
"Therefore, we conduct the experiment on EmoryNLP, where there are usually multiple speakers in one dialog, and the DAG # Preds F1 score Sequence 0.92 37.57 Single local information 1.66 38.22 Common = 2 1.78 38.30 Common = 4 3.28 38.34 Common = 6 4.50 38.48 Ours = 1 2.69 39.02 Ours = 2 4.46 38.90 Ours = 3 5.65 38.94 Table 3: Different DAGs applied to DAG-ERC.",
"speakers speak in arbitrary order.",
"The test performances are reported in Table 3, together with the average number of each utterance's predecessors.",
"Several instructive observations can be made from the experimental results.",
"Firstly, the performance of DAG-ERC drops significantly when equipped with the sequence structure.",
"Secondly, our proposed DAG structure has the highest performance among the DAG structures.",
"Considering our DAG with = 2 and common DAG with = 6 , with very close numbers of predecessors, our DAG still outperforms the common DAG by a certain margin.",
"This indicates that the constraints based on speaker identity and positional relation are effective inductive biases, and the structure of our DAG is more suitable for the ERC task than rigidly connecting each utterance with a fixed number of predecessors.",
"Finally, we find that increasing the value of may not contribute to the performance of our DAG, and = 1 tends to be enough.",
"To study the impact of the modules in DAG-ERC, we evaluate DAG-ERC by removing relation-aware feature transformation, the nodal information unit, and the contextual information unit individually.",
"The results are shown in Table 4.",
"As shown in the table, removing the relation-aware feature transformation causes a sharp performance drop on IEMOCAP and DailyDialog, while a slight drop on MELD and EmoryNLP.",
"Note that there are only two speakers per dialog Method IEMOCAP MELD DailyDialog EmoryNLP DAG-ERC 68.03 63.65 59.33 39.02 w/o rel-trans 64.12 ( 3.91) 63.29 ( 0.36) 57.12 ( 2.21) 38.87 ( 0.15) w/o (cid:101) H 66.19 ( 1.84) 63.17 ( 0.48) 58.05 ( 1.28) 38.54 ( 0.48) w/o C 66.32 ( 1.71) 63.36 ( 0.29) 58.90 ( 0.43) 38.50 ( 0.52) Table 4: Results of ablation study on the four datasets, with rel-trans , (cid:101) H , and C denoting relation-aware feature transformation, nodal information unit, and contextual information unit, respectively.",
"in IEMOCAP and DailyDialog, and there are usually more than two speakers in dialogs of MELD and EmoryNLP.",
"Therefore, we can infer that the relation of whether two utterances have the same speaker is sufficient for two-speaker dialogs, while falls short in the multi-speaker setting.",
"Moreover, we find that on each dataset, the performance drop caused by ablating nodal information unit is similar to contextual information unit, and all these drops are not that critical.",
"This implies that either the nodal information unit or contextual information unit is effective for the ERC task, while combining the two of them can yield further performance improvement.",
"According to the model structure introduced in Section 3.3.2, the only way for GNNs to receive information from a remote utterance is to stack many GNN layers.",
"However, it is well known that stacking too many GNN layers might cause performance degradation due to over-smoothing (Kipf and Welling, 2016).",
"We investigate whether the same phenomenon would happen when stacking many DAG-ERC layers.",
"We conduct an experiment on IEMOCAP and plot the test result by different numbers of layers in Figure 4, with RGAT-RoBERTa and DAGNN as baselines.",
"As illustrated in the figure, RGAT suffers a significant performance degradation after the number of layers exceeds 6.",
"While for DAGNN and DAG-ERC, with the number of layers changes, both of their performances fluctuate in a relatively narrow range, indicating that over-smoothing tends not to happen in the directed acyclic graph networks.",
"After going through the prediction results on the four datasets, we find that our DAG-ERC fails to distinguish between similar emotions very well, such as frustrated vs anger , happiness vs excited , scared vs mad , and joyful vs peaceful .",
"This kind of mistake is also reported by Ghosal et al. (2019).",
"Besides, we find that DAG-ERC tends to misclassify samples of other emotions to neutral on MELD, DailyDialog and EmoryNLP due to the majority proportion of neutral samples in these datasets.",
"We also look closely into the emotional shift issue, which means the emotions of two consecutive utterances from the same speaker are different.",
"Existing ERC models generally work poorly in emotional shift.",
"As shown in Table 5, our DAG-ERC also fails to perform better on the samples with emotional shift than that without it, though the performance is still better than previous models.",
"For example, the accuracy of DAG-ERC in the case of emotional shift is 57.98% on the IEMOCAP dataset, which is higher than 52.5% achieved by DialogueRNN (Majumder et al., 2019) and 55% achieved by DialogXL (Shen et al., 2020).",
"In this paper, we presented a new idea of modeling conversation context with a directed acyclic graph (DAG) and proposed a directed acyclic graph neural network, namely DAG-ERC, for emotion recognition in conversation (ERC).",
"Extensive experiments were conducted and the results show that the proposed DAG-ERC achieves comparable performance with the baselines.",
"Moreover, by comprehensive evaluations and ablation study, we confirmed the superiority of our DAG-ERC and the impact of its modules.",
"Several conclusions can be drawn from the empirical results.",
"First, the DAG structures built from conversations do affect the performance of DAG-ERC, and with the constraints on speaker identity and positional relation, the proposed DAG structure outperforms its variants.",
"Second, the widely utilized graph relation type of whether two utterances have the same speaker is insufficient for multi-speaker conversations.",
"Third, the directed acyclic graph network does not suffer over-smoothing as easily as GNNs when the number of layers increases.",
"Finally, many of the errors misjudged by DAG-ERC can be accounted for by similar emotions, neutral samples and emotional shift.",
"These reasons have been partly mentioned in previous works but have yet to be solved, which are worth further investigation in future work.",
"We thank the anonymous reviewers.",
"This paper was supported by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We introduce S2ORC, 1 a large corpus of 81.1M English-language academic papers spanning many academic disciplines.",
"The corpus consists of rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers.",
"Full text is annotated with automatically-detected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects.",
"In S2ORC, we aggregate papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date.",
"We hope this resource will facilitate research and development of tools and tasks for text mining over academic text.",
"Academic papers are an increasingly important textual domain for natural language processing (NLP) research.",
"Aside from capturing valuable knowledge from humankind's collective research efforts, academic papers exhibit many interesting characteristics thousands of words organized into sections, objects such as tables, figures and equations, frequent inline references to these objects, footnotes, other papers, and more.",
"Different types of resources have been used to support research over academic papers.",
"Citation graphs like AMiner's Open Academic Graph (Tang et al., 2008), the Microsoft Academic Graph (MAG) (Shen et al., 2018), and the Semantic Scholar literature graph (Ammar et al., 2018), have had widespread application in bibliomet-rics, science-of-science, information retrieval, and network analysis.",
"Digital archives like arXiv, 2 denotes equal contribution 1 Instructions for access to the data and model are available at https://github.com/allenai/s2orc/ .",
"2 https://arxiv.org Figure 1: Inline citations and references to figures and tables are annotated in S2ORC's structured full text.",
"PubMed Central, 3 CiteSeerX (Giles et al., 1998), 4 and the ACL Anthology (Bird et al., 2008), 5 are popular resources for deriving large text corpora for summarization and language modeling or, with further annotation, development of datasets for tasks like entity extraction, text classification, parsing, and discourse analysis.",
"We focus on bibliometrically-enhanced derivations of these corpora, such as the ACL Anthology Network (AAN) (Radev et al., 2009) 6 derived from the ACL Anthology, RefSeer (Huang et al., 2015) derived from CiteSeerX, and Saier and Farber (2019) derived from arXiv, which combine useful aspects of citation graphs and raw text corpora.",
"These resources provide citation mentions linked to paper identifiers in their corresponding digital archives, such as the ACL Anthology and CiteSeerX, or to nodes in citation graphs such as MAG, enabling new forms of cross-paper discourse analysis (e.g., studying how or why papers are related).",
"3 https://www.ncbi.nlm.nih.gov/pmc 4 https://citeseerx.ist.psu.edu 5 https://www.aclweb.org/anthology 6 http://aan.how/ Corpus Papers w/ body text Citation contexts References to tables / figures / equations Linked to graph Academic disciplines S2ORC (PDF-parse) 8.1M full text yes S2ORC (full) multi S2ORC (LATE X-parse) 1.5M full text yes S2ORC (full) physics, math, CS PubMed Central (OA) 2.6M full text yes PubMed bio, med AAN (Radev et al., 2009) 25k full text no ACL Anthology comp ling Saier and Farber (2019) 1.0M snippets no MAG physics, math, CS RefSeer (Huang et al., 2015) 1.0M snippets no CiteSeerX multi Table 1: A comparison of S2ORC with other publicly-available academic text corpora.",
"Yet, existing corpora are not without their limitations.",
"Some cover a small number of papers (e.g. AAN), are domain-specific (e.g. AAN, PubMed Central, Saier and Farber (2019)), or may not provide usable full text (e.g. Saier and Farber (2019) and RefSeer).",
"To address these issues, we introduce S2ORC, 7 the Semantic Scholar 8 Open Research Corpus, a large publicly-available collection of 81.1M academic papers covering dozens of academic disciplines.",
"Each paper is associated with metadata and abstracts aggregated from hundreds of trusted sources such as academic publishers and literature archives like PubMed and arXiv.",
"Notably, we release structured, machine-readable full text extracted from PDFs for 8.1M papers which we've identified as having open access status.",
"S2ORC full text preserves meaningful structure, e.g., paragraph breaks, section headers, inline citation mentions, references to tables and figures, and resolved citation links to other papers.",
"Additionally, we provide 1.5M full text LATEX parses from which we have extracted, in addition to citations and references, the source text of tables and mathematical formulas.",
"As shown in Table 1, S2ORC provides substantially more structured full text papers and covers a more diverse set of academic disciplines than other resources.",
"pronounced stork 8 The papers included in S2ORC are a curated subset of the papers in the Semantic Scholar literature graph (Ammar et al., 2018) that focuses only on English-language papers with abstracts or full text available.",
"See 2.5 for details on filtering through Semantic Scholar papers.",
"In this paper, we describe the construction of S2ORC (2).",
"We provide summary statistics of the corpus (3) and evaluate the data quality (4).",
"We then evaluate a BERT model pretrained on S2ORC (5), and discuss potential applications to a variety of NLP and analysis tasks over academic text (6).",
"Finally, we compare S2ORC with other publicly-available academic text corpora (7).",
"S2ORC is constructed using data from the Semantic Scholar literature corpus (Ammar et al., 2018).",
"Papers in Semantic Scholar are derived from numerous sources: obtained directly from publishers, from resources such as MAG, from various archives such as arXiv or PubMed, or crawled from the open Internet.",
"Semantic Scholar clusters these papers based on title similarity and DOI overlap, resulting in an initial set of approximately 200M paper clusters.",
"To construct S2ORC, we must overcome challenges in",
"(i) paper metadata aggregation,",
"(ii) identifying open access publications, and",
"(iii) clustering papers, in addition to identifying, extracting, and cleaning the full text and bibliometric annotations associated with each paper.",
"The pipeline for creating S2ORC is: 1) Process PDFs and LATEX sources to derive metadata, clean full text, inline citations and references, and bibliography entries, 2) Select the best metadata and full text parses for each paper cluster, 3) Filter paper clusters with insufficient metadata or content, and 4) Resolve bibliography links between paper clusters in the corpus.",
"Details for these steps are provided below.",
"See Appendix A for definitions of terminology.",
"The output of this pipeline is visualized in Figure 1. 2.1 Processing PDFs We process PDFs from the Semantic Scholar corpus using SCIENCEPARSE v3.0.0 9 and GROBID v0.5.5 10 (Lopez, 2009).",
"Our processing pipeline is described below.",
"Selecting PDFs We remove PDFs which are less likely to be academic papers.",
"SCIENCEPARSE and GROBID are not optimized for processing non-paper academic documents such as dissertations, reports, slides, etc., and this filtering step is necessary to increase output data quality.",
"See Appendix B for filter details.",
"There are around 31.3M PDFs associated with approximately 200M initial paper clusters, and 30.5M PDFs are selected for processing based on these filtering criteria.",
"Extracting structured data from PDFs We use SCIENCEPARSE to extract title and authors from each PDF.",
"11 We then use GROBID to process each PDF.",
"From the XML output of GROBID , we extract",
"(i) metadata such as title, authors, and abstract,",
"(ii) paragraphs from the body text organized under section headings,",
"(iii) figure and table captions,",
"(iv) equations, table content, headers, and footers, which we remove from the body text,",
"(v) inline citations in the abstract and body text,",
"(vi) parsed bibliography entries with title, authors, year, and venue identified, and",
"(vi) links between inline citation mentions and their corresponding bibliography entries.",
"Postprocessing GROBID output We postprocess GROBID output using regular expressions to classify the parenthetical citation style of a paper as BRACKET (e.g. [2]), NAME-YEAR (e.g. ABC, 2019), or OTHER (superscripts and other mixed styles).",
"We focus on addressing two types of common errors in GROBID 's inline citation extractions:",
"(i) false positives resulting from superscripts or equation references being recognized as 9 https://github.com/allenai/science-parse 10 https://github.com/kermitt2/grobid 11 Our evaluations suggest SCIENCEPARSE outperforms GROBID for title and author extraction.",
"inline citations in papers with BRACKET -style citations, and",
"(ii) false negatives resulting from an inability to expand bracket citation ranges (e.g. [3]-[5] should be expanded to [3], [4], [5] before linking).",
"False positives are detected using regular expressions and removed from GROBID output.",
"Bracket citation ranges are manually expanded and linked to their corresponding bibliography entries.",
"The resulting parses are expressed in JSON format.",
"12 2.2 Processing LATEX source LATEX document source is available for a majority of arXiv submissions, and where available, are used to construct a full text parse.",
"We retrieve body text, section headers, figure/table captions, table representations, equations, and inline citations and references directly from LATEX source.",
"Inspired by Saier and Farber (2019), we first convert LATEX source into XML documents and then extract structured information from the XML.",
"Due to direct access to source, the accuracy of citation span, reference, caption, section header, and equation detection is near-perfect.",
"We process 1.5M papers from LATEX source derived from arXiv, all of which are included as part of S2ORC.",
"Surprisingly, due to the diversity of ways in which authors define metadata in LATEX, the quality of metadata extracted from LATEX documents is worse than those extracted from PDF.",
"Therefore, we do not use LATE X-derived metadata for paper clustering or metadata selection.",
"Canonical values for title, authors and other metadata fields are selected from among the papers in a cluster.",
"First, if a cluster contains multiple PDFs, we select one to be canonical.",
"This can occur, for example, in a cluster containing an arXiv preprint and its eventual camera-ready version.",
"We preferentially select PDFs from open access sources and break ties by prioritizing PDFs for which there exist richer publisher-provided metadata (e.g. abstract, year, venue, DOI).",
"If the selected PDF is associated with publisher-provided metadata, we select those publisher-provided metadata fields to be canonical.",
"In cases where publisher-provided metadata is incomplete, we use majority voting to select 12 The S2ORC data format is described at https:// github.com/allenai/s2orc canonical metadata values.",
"We break ties by minimizing the total number of sources from which we select metadata (e.g., if IEEE provides title, authors and abstract, DBLP provides title and authors, and arXiv provides title and abstract, we prioritize selecting IEEE over the union of DBLP and arXiv).",
"S2ORC metadata fields include title, author, year, venue, journal, abstract, and identifiers (DOI, PubMed, PubMed Central (PMC), arXiv, and ACL Anthology).",
"In cases where the title and authors are not provided by any publishers, we derive the values for these fields from the parsed PDF, prioritizing SCIENCEPARSE over GROBID .",
"We further comment on paper clustering as it pertains to metadata selection in Appendix C.",
"We construct the final corpus by assembling clustered paper metadata with GROBID and LATEX parse objects.",
"We associate the GROBID parse with the S2ORC paper object if a valid GROBID parse is produced from the PDF, and the PDF is open access.",
"Open access status is assigned if a paper is derived from arXiv, ACL Anthology, PubMed Central (OA), and/or associated with an open-access DOI in the Unpaywall database.",
"13 If the PDF is not open access, we only include the bibliography from the GROBID parse in S2ORC.",
"If arXiv LATEX source is available for the paper cluster, we also associate the LATEX parse with the S2ORC paper object.",
"We further filter paper clusters to remove papers with",
"(i) no title,",
"(ii) no authors,",
"(iii) fewer than 100 characters of abstract and body text, and",
"(iv) where English is not the primary language.",
"The first three filters remove papers that provide little value for bibliometric-based or text-based analyses.",
"The English language filter 14 reduces GROBID parsing errors.",
"All filters are applied in series.",
"Subsequently, 95.5M paper clusters are filtered out based on the aforementioned criteria and removed from the corpus.",
"The distribution of filtered papers is given in Table 2. We note that a large number of paper clusters are filtered out; 80.0M of these filtered clusters have no associated publisher-provided abstract or associated PDF and 13 Unpaywall 2019-04-19 data dump 14 We use the cld2 tool for language detection with a threshold of 0.9 over the English language score.",
"do not provide significant value to our dataset in their current state.",
"Although these papers that lack text may be useful as cite-able nodes in S2ORC, they are generally of lower quality and are filtered out of the corpus to improve corpus quality.",
"Each bibliography entry in both GROBID and LATEX parses are linked to the most similar papers in the corpus.",
"For linking, we score each bibliography entry and paper cluster pair using a similarity score computed between their titles.",
"Each title is first normalized (i.e. white spaces stripped, lower-cased, special characters removed) and represented by its character 3-grams.",
"The similarity score S title is computed as the harmonic mean between a Jaccard index and a containment metric: S title = 2 J C J + C (1) where the Jaccard index J and containment metric C are computed from the n -grams of the two titles N 1 and N 2 as: J = | N 1 N 2 | | N 1 N 2 | C = | N 1 N 2 | min ( | N 1 | , | N 2 | ) For each bibliography entry, the bibliography-paper pair with the highest similarity score above 0.8 is output as the correct link.",
"Otherwise, the bibliography entry remains unlinked.",
"We perform an evaluation of linking performance in 4.",
"The resulting corpus consists of 81.1M papers.",
"Our publisher-provided abstract coverage is 90.4%, or 73.4M papers.",
"Our PDF coverage is 35.6%, or 28.9M papers.",
"These PDFs are processed using the pipeline discussed in 2.1.",
"The Total papers 81.1M Papers w/ PDF 28.9M (35.6%) Papers w/ bibliographies 27.6M (34.1%) Papers w/ GROBID full text 8.1M (10.0%) Papers w/ LaTeX full text 1.5M (1.8%) Papers w/ publisher abstract 73.4M (90.4%) Papers w/ DOIs 52.2M (64.3%) Papers w/ Pubmed IDs 21.5M (26.5%) Papers w/ PMC IDs 4.7M (5.8%) Papers w/ ArXiv IDs 1.7M (2.0%) Papers w/ ACL IDs 42k (0.1%) Table 3: Statistics on paper provenance.",
"vast majority of these PDFs are successfully processed using GROBID , and we extract bibliography entries for 27.6M of the 28.9M PDFs.",
"We identify 8.1M of the 28.9M PDFs as open access (2.4), and we provide full text for all papers in this open access subset.",
"For the 1.5M papers for which LATEX source is available through arXiv, we further obtain and provide LATEX parses (2.2).",
"Using these extracted bibliographies, we resolve a total 380.5M citation links between papers (2.6), 156.5M of which can be tied back to their inline citation mentions in the full text.",
"See Table 3 for more provenance statistics.",
"We provide statistics for the GROBID and LATEX full text parses and bibliography linking in Figure 2: Distribution of papers by Microsoft Academic field of study.",
"Table 4.",
"On average, LATEX parses contain many more paragraphs of body text, because LATEX source files preserve line breaks rather than paragraph breaks.",
"We speculate that differences in bibliography entry and linking counts between the GROBID and LATEX parses are due to a combination of:",
"(i) challenges in LATEX bibliography expansion and parsing, and",
"(ii) differences in bibliography formatting in some math and physics venues (where bibliography entries do not include paper titles, which we depend on for bibliography linking).",
"The distribution of academic disciplines in S2ORC is given in Figure 2 using Microsoft Academic fields of study.",
"Not all papers in S2ORC can be found in Microsoft Academic those not found are denoted as Unclassified .",
"Approximately 677k papers have more than one primary Microsoft Academic field of study; Figure 2 represents only the top field of study for each paper.",
"To evaluate the quality of our metadata selection, we randomly sample 500 paper clusters, restricting to those with PDFs.",
"Within each sampled cluster, we determine whether the canonical title and authors match the title and authors in the selected canonical PDF.",
"Inline citation detection and bibliography parsing are dependent on GROBID (Lopez, 2009).",
"Ahmad and Afzal (2018) evaluate GROBID for de-Domain Dataset Reference Task SCIBERT S2ORC-S CIBERT BC5CDR Li et al. (2016) NER 90.01 90.41 0.06 JNLPBA Collier and Kim (2004) NER 77.28 77.70 0.25 NCBI-disease Dogan et al. (2014) NER 88.57 88.70 0.52 Biomed EBM-NLP Nye et al. (2018) PICO 72.28 72.35 0.95 GENIA Kim et al. (2003) DEP (LAS) 90.43 90.80 0.19 GENIA Kim et al. (2003) DEP (UAS) 91.99 92.31 0.18 ChemProt Krallinger et al. (2017) REL 83.64 84.59 0.93 SciERC Luan et al. (2018) NER 67.57 68.93 0.19 CS SciERC Luan et al. (2018) REL 79.97 81.77 1.64 ACL-ARC Jurgens et al. (2018) CLS 70.98 68.45 2.47 Biomed & CS SciCite Cohan et al. (2019) CLS 85.49 84.76 0.37 Multi-domain PaperField Beltagy et al. (2019) CLS 65.71 65.99 0.08 Table 5: S2ORC-S CIBERT test results are comparable with reported SCIBERT test results on the set of tasks and datasets from Beltagy et al. (2019), to which we refer the reader for descriptions.",
"tecting inline citations using a corpus of 5k Cite-Seer papers, and found GROBID to have an F1-score of 0.89 on this task.",
"Tkaczyk et al. (2018) report GROBID as the best among 10 out-of-the-box tools for parsing bibliographies, also achieving an F1 of 0.89 in an evaluation corpus of 9.5k papers.",
"We perform an evaluation over 200 randomly sampled papers from S2ORC and found comparable F1-scores for GROBID performance on both tasks.",
"For bibliography linking, we randomly sample S2ORC papers (500 GROBIDPDF parses and 100 LATEX parses) and select one linked bibliography entry from each sampled paper (while avoiding selecting multiple entries linked to the same paper).",
"We determine whether the title and authors in the bibliography entry agree with the title and authors of the linked paper.",
"To demonstrate the suitability of S2ORC for language model pretraining, we train BERT-Base (Devlin et al., 2019) on the parsed full text of S2ORC and show that the resulting model (S2ORC-S CIBERT) performs similarly to SCIBERT (Beltagy et al., 2019) on a diverse suite of scientific NLP tasks and datasets.",
"While SCIBERT is a BERT-Base model also trained on multiple domains of scientific text, key differences in its pretraining corpus and vocabulary and those used for S2ORC-S CIBERT are: Domain: Beltagy et al. (2019) report a pretraining corpus consisting of 82% biomedical and 18% computer science papers.",
"Our S2ORC pretraining corpus consists of a more balanced distribution of papers across diverse academic disciplines (see Figure 2), such that biomedical (42.7%) and computer science (7.2%) papers only comprise half the corpus.",
"Preprocessing: S2ORC identifies figure captions, table text and captions, headers, footers, and footnotes.",
"We exclude these from the pretraining corpus.",
"We tokenize and sentencize the text using scispaCy (Neumann et al., 2019).",
"We also use heuristic filters to remove ill-formed paragraphs (such as those containing too many symbols).",
"Size: The resulting S2ORC pretraining corpus contains 16.4B tokens, nearly five times larger than the corpus for SCIBERT.",
"Vocab: Following Beltagy et al. (2019), we construct a cased WordPiece (Wu et al., 2016) vocabulary of size 31k using 15% of the S2ORC pretraining corpus.",
"The Jaccard index between the S2ORC-S CIBERT and SCIBERT vocabularies is 0.536.",
"We follow a similar setup to Beltagy et al. (2019) for both pretraining and fine-tuning S2ORC-S CIBERT.",
"Like SCIBERT, S2ORC-S CIBERT is pretrained from scratch using the original BERT code 15 and default BERT-Base configurations on a single TPU v3-8 for one week.",
"Also like SCIBERT, S2ORC-S CIBERT is fine-tuned on all tasks by optimizing a cross entropy loss using Adam (Kingma and Ba, 2014), a linear learning rate decay with 10% warm-up, batch size of 32, and dropout of 0.1.",
"We search over an equal-sized grid of hyperpa-rameters as Beltagy et al. (2019).",
"We fine-tune for 1 to 4 epochs with a maximum learning rate of 1e-5, 2e-5, 3e-5, or 5e-5.",
"For each task, we select the optimal combination of these two hyperparam-eters using the development set and report the corresponding test set results.",
"For details, we refer the reader to SCIBERT code, 16 which we use for all experiments.",
"The results in Table 5 show that S2ORC-S CIBERT outperforms SCIBERT on many tasks despite including a large percentage of data outside of the biomedical and computer science domains.",
"As the pretraining corpus for SCIBERT is not publicly-available, S2ORC can serve as a large pretraining corpus for evaluating and comparing pretraining approaches on academic text.",
"We also release S2ORC-S CIBERT to serve as a baseline for research.",
"S2ORC can be used for many NLP and analysis tasks over academic text.",
"We give a summary of potential applications below.",
"The combination of structured full text annotated with linked inline citations makes S2ORC well-suited for a variety of citation-related text-based tasks.",
"Without any additional supervision, S2ORC can be used directly for both inline (He 15 https://github.com/google-research/ bert 16 https://github.com/allenai/scibert et al., 2010; Duma and Klein, 2014; Jeong et al., 2019) and document-level (Yu et al., 2012; Liu et al., 2015; Bhagavatula et al., 2018) citation recommendation.",
"Among document-level recommenders, S2ORC is well-suited to the setting of Liu et al. (2015), who use inline citation contexts to filter document-level recommendations.",
"include classifying citation intent (Teufel et al., 2006; Jurgens et al., 2018; Cohan et al., 2019), identifying citation sentiment (Athar and Teufel, 2012), identifying meaningful citations (Valen-zuela et al., 2015), extracting key phrases (Caragea et al., 2014), and citation context-based paper summarization (Teufel et al., 2006; Qazvinian and Radev, 2008; Cohan and Goharian, 2015; Mitrovic and Muller, 2015).",
"The models in these papers require labeled citation contexts for training.",
"S2ORC could potentially benefit task performance without additional annotation, for example, by pretraining language models on S2ORC citation contexts before fine-tuning to these tasks.",
"Cohan et al. (2019) find that long citation contexts (beyond sentence boundary) are important for tasks like summarization; the wider citation contexts available in S2ORC could be used to augment existing datasets for document-level tasks.",
"Citation contexts can also be used for the more general tasks of identifying similar papers (Kanakia et al., 2019; Eto, 2019; Haruna et al., 2018; Small, 1973) or bibliometric analysis (Ding et al., 2014; Trujillo and Long, 2018; Asatani et al., 2018).",
"Towards these tasks, the citation contexts in S2ORC can provide insight into how and why papers are cited.",
"We illustrate this by following Berger et al. (2016) in training a word2vec skip-gram model (Mikolov et al., 2013) using full text citation contexts in S2ORC, where each inline citation span is replaced with its linked paper identifier.",
"When training over this modified text, the word2vec model learns embeddings corresponding to each unique paper identifier, which can be leveraged as paper embeddings.",
"The resulting embeddings shown in Figure 3 and Table 7 form clusters corresponding closely to arXiv Machine Learning categories.",
"Upon inspection, papers of different categories in the same embedding sub-region share research themes (see Table 7), indicating that these paper embeddings trained from citation contexts capture coherent topic similarity and relatedness.",
"These paper embeddings can be used to identify similar papers, using the similarity between two papers' citing contexts as a proxy for paper similarity.",
"The LATEX subset of S2ORC also provides unique opportunities for research.",
"In addition to citations and references, we also extract and parse tables from LATEX source into a structured format.",
"There is an opportunity to use these tables for corpus-level results extraction and aggregation.",
"The LATEX subset also has fine-grained extraction and labeling of mathematical formulas, which can be used to understand proof construction, or to assist in symbol co-reference resolution.",
"The ACL Anthology Network (AAN) (Radev et al., 2009) is a bibliometric-enhanced corpus covering papers in the field of computational linguistics.",
"It is built from the ACL Anthology (Bird et al., 2008) and consists of 24.6k papers manually augmented with citation information.",
"The PubMed Central Open Access corpus is a large corpus of 2.6M papers in the biomedical domain with citations linked to PubMed identifiers.",
"17 CiteSeerX (Giles et al., 1998), consists of papers collected primarily via web crawl, without integrating metadata provided by sources outside of the PDF.",
"Although citation contexts are no longer available through CiteSeerX, the RefSeer dataset (Huang et al., 2015) 18 is a dataset of short citation context snippets derived from 1.0M papers from CiteSeerX.",
"More recently, Saier and Farber (2019) introduce a corpus built using 1.0M arXiv publications.",
"They use LATEX source to extract text, citation spans and bibliography entries, which are linked to papers in the Microsoft Academic Graph.",
"The citation context they provide are extracted snippets and no bibliography parses are provided.",
"An updated version of this dataset (Saier and Farber, 2020) released concurrently with this work now includes full text.",
"Compared with these resources, S2ORC represents a significantly larger dataset of linked papers covering broad domains of science by leveraging PDF parsing in addition to LATEX source.",
"S2ORC also provides clean full text for text mining and NLP needs with additional enhancements such as annotations of table and figure references and captions.",
"S2ORC's wealth of metadata and structured text allows it to be flexibly adapted to a variety of downstream tasks.",
"We introduce S2ORC, the largest publicly-available corpus of English-language academic papers covering dozens of academic disciplines.",
"S2ORC consists of 81.1M papers, 380.5M resolved citation links, and structured full text from 8.1M open-access PDFs and 1.5M LATEX source files.",
"We aggregate metadata and abstracts from hundreds of trusted sources.",
"Full text is augmented with sections, citation mentions, and references to tables and figures.",
"We demonstrate that S2ORC can be used effectively for downstream NLP tasks in academic paper analysis.",
"The pipeline for creating S2ORC was used to construct the CORD-19 corpus (Wang et al., 2020), which saw fervent adoption as the canonical resource for COVID-19 text mining.",
"CORD-19 is aimed at assisting biomedical experts and policy makers process large amounts of COVID-19 literature in the search for effective treatments and management policies.",
"With over 75K dataset downloads, dozens of search and question-answering systems, and hundreds of participating teams across two shared tasks 19 in the first month of its release, there is little doubt of the resource's impact.",
"Our hope with the release of S2ORC is to ensure such text mining resources are available to researchers even beyond periods of global crisis.",
"We thank Doug Downey, Oren Etzioni, Andrew Head, and Bryan Newbold for their valuable feedback on the manuscript.",
"We also thank Isabel Ca-chola, Dallas Card, Mike D'Arcy, Suchin Guru-rangan, Daniel King, Rik Koncel-Kedziorski, Susan Liu, Kelvin Luu, Noah Smith, Gabi Stanovsky, and Dave Wadden for feedback on the dataset during early development.",
"Finally, we thank the Semantic Scholar team for assisting with data access and system infrastructure."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Identifying controversial posts on social media is a fundamental task for mining public sentiment, assessing the influence of events, and alleviating the polarized views.",
"However, existing methods fail to 1) effectively incorporate the semantic information from content-related posts; 2) preserve the structural information for reply relationship modeling; 3) properly handle posts from topics dissimilar to those in the training set.",
"To overcome the first two limitations, we propose T opic-Post-Comment Graph Convolutional Network (TPC-GCN), which integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection.",
"As to the third limitation, we extend our model to Disentangled TPC-GCN (DTPC-GCN), to disentangle topic-related and topic-unrelated features and then fuse dynamically.",
"Extensive experiments on two real-world datasets demonstrate that our models outperform existing methods.",
"Analysis of the results and cases proves that our models can integrate both semantic and structural information with significant generalizability.",
"Social media such as Reddit 1 and Chinese Weibo 2 has been the major channel through which people can easily propagate their views.",
"In the open and free circumstance, the views expressed by the posts often spark fierce discussion and raise controversy among the engaging users.",
"These controversial posts provide a lens of public sentiment, which bring about several tasks such as news topic selection, influence assessment (Hessel and Lee, 2019), and alleviation of polarized views (Garimella et al., 2017).",
"As a basis of all mentioned tasks, automatically identifying the controversial posts has Corresponding author.",
"attracted wide attention (Addawood et al., 2017; Coletto et al., 2017; Rethmeier et al., 2018; Hessel and Lee, 2019).",
"This work focuses on post-level controversy detection on social media, i.e., to classify if a post is controversial or non-controversial.",
"According to (Coletto et al., 2017), a controversial post has debatable content and expresses an idea or an opinion which generates an argument in the responses, representing opposing opinions in favor or in disagreement with the post.",
"In practice, the responses of a target post (the post to be judged) generally come from two sources, i.e., the comments attached to the post and other content-related posts.",
"Figure 1 shows an example where the target post P expresses that Xiaomi's Mimoji do not copy Apple's Memoji.",
"We can see that: 1) The comments show more supports and fewer refutes to P , which raises a small controversy.",
"However, the related posts show extra refutations and enhance the controversy of P .",
"2) C 3 1 expresses refutation literally, but it actually supports P because in the comment tree, it refutes C 3 , a refuting comment to P .",
"3) There exist two kinds of semantic clues for detection, topic-related and topic-unrelated clues.",
"For example, support and against is unrelated to this topic, while copy and similar are topic-related.",
"Topic-related clues can help identify posts in a similar topic, but how effective they are for those in dissimilar topics depends on the specific situation.",
"Therefore, to comprehensively evaluate the controversy of a post, the information from both the comments and related posts should be integrated properly on semantic and structure level.",
"Existing methods detecting controversy on social media have exploited the semantic feature of the target post and its comments as well as structural feature.",
"However, three drawbacks limit their performance: 1) These methods ignore the role of the related posts in the same topic in providing extra supports or refutations on the target post.",
"Only exploiting the information from comments is insufficient.",
"2) These methods use statistical structure-based features which cannot model the reply-structure relationships (like P C 1 and C 3 C 3 1 in Figure 1).",
"The stances of some comments may be misunderstood by the model (like C 3 1 ).",
"3) These methods tend to capture topic-related features that are not shared among different topics with directly using information of content (Wang et al., 2018).",
"The topic-related features can be helpful when the testing post is from a topic similar to those in the training set but would hurt the detection otherwise.",
"Recently, graph convolutional networks have achieved great success in many areas (Marcheg-giani et al., 2018; Ying et al., 2018; Yao et al., 2019; Li and Goldwasser, 2019) due to its ability to encode both local graph structure and features of node (Kipf and Welling, 2017).",
"To overcome the first two drawbacks of existing works, we propose a Topic-Post-Comment Graph Convolutional Network (TPC-GCN) (see Figure 2a) that integrates the information from the graph structure and content of topics, posts, and comments for post-level controversy detection.",
"First, we create a TPC graph to describe the relationship among topics, posts, and comments.",
"To preserve the reply-structure information, we connect each comment node with the post/comment node it replies to.",
"To include the information from related posts, we connect each post node with its topic node.",
"Then, a GCN model is applied to learn node representation with content and reply-structure information fused.",
"Finally, the updated vectors of a post and its comments are fused to predict the controversy.",
"TPC-GCN is mainly for detection in intra-topic mode, i.e., topics of testing posts appear in the training set, for it cannot overcome the third drawback.",
"We thus extend a two-branch version of TPC-GCN named Disentangled TPC-GCN (DTPC-GCN) (see Figure 2b) for inter-topic mode (no testing posts are from the topics in the training set).",
"We use a TPC-GCN in each branch, but add an auxiliary task, topic classification.",
"The goals of the two branches for the auxiliary task are opposite to disentangle the topic-related and topic-unrelated features.",
"The disentangled features can be dynamically fused according to the content of test samples with attention mechanism for final decision.",
"Extensive experiments demonstrate that our models outperform existing methods and can exploit features dynamically and effectively.",
"The main contributions of this paper are as follows: 1. We propose two novel GCN-based models, TPC-GCN and DTPC-GCN, for post-level controversy detection.",
"The models can integrate the information from the structure and content of topics, posts, and comments, especially the information from the related posts and reply tree.",
"Specially, DTPC-GCN can further disentangle the topic-related features and topic-unrelated features for inter-topic detection.",
"2. We build a Chinese dataset for controversy detection, consisting of 5,676 posts collected from Chinese Weibo, each of which are manually labeled as controversial or noncontroversial.",
"To the best of our knowledge, this is the first released Chinese dataset for controversy detection.",
"3. Experiments on two real-world datasets demonstrate that the proposed models can effectively identify the controversial posts and outperform existing methods in terms of performance and generalization.",
"Controversy detection on the Internet have been studied on both web pages and social media.",
"Existing works detecting controversy on web pages mostly aims at identifying controversial articles in Figure 2: Architecture of",
"Wikipedia.",
"Early methods are mainly based on statistical features, such as revision times (Kittur et al., 2007), edit history (Vuong et al., 2008; Yasseri et al., 2012; Rad and Barbosa, 2012) and dispute tag (Dori-Hacohen and Allan, 2015).",
"Others incorporate the collaboration-network-based features, sentiment-based features (Vuong et al., 2008; Wang and Cardie, 2014), and semantic features (Linmans et al., 2018).",
"As to the common web pages, existing works exploit the controversy on Wikipedia (Awadallah et al., 2012; Dori-Hacohen and Allan, 2013, 2015; Jang et al., 2016) and user comments (Choi et al., 2010; Tsytsarau et al., 2010) for detection.",
"Unlike the web pages, social media contains more diverse topics and more fierce discussion among users, which makes controversy detection on social media more challenging.",
"Early studies assume that a topic has its intrinsic controversy, and focus on topic-level controversy detection.",
"Popescu and Pennacchiotti (2010) detect controversial snapshots (consisting of many tweets referring to a topic) based on Twitter-based and external-knowledge features.",
"Garimella et al. (2018) build graphs based on a Twitter topic, such as retweeting graph and following graph, and then apply graph partitioning to measure the extent of controversy.",
"However, topic-level detection is rough, because there exists non-controversial posts in a controversial topic and vice versa.",
"Recent works focus on post-level controversy detection by leveraging language features, such as emotional and topic-related phrases (Rethmeier et al., 2018), emphatic features, Twitter-specific features (Addawood et al., 2017).",
"Other graph-based methods exploit the features from the following graph and comment tree (Coletto et al., 2017; Hessel and Lee, 2019).",
"The limitations of current post-level works are that they do not effectively integrate the information from content and reply-structure, and ignore the role of posts in the same topic.",
"Moreover, the difference between intra-topic and inter-topic mode is not realized.",
"Only Hessel and Lee (2019) deal with topic transfer, but they train on each topic and test on others to explore the transferability, which is not suitable in practice.",
"In this section, we introduce the Topic-Post-Comment Graph Convolutional Network (TPC-GCN) and its extension Disentangled TPC-GCN (DTPC-GCN), as shown in Figure 2. We first introduce the TPC graph construction and then detail the two models.",
"To model the paths of message passing among topics, posts, and comments, we first construct a topic-post-comment graph G = ( V, E ) for target posts, where V and E denote the set of nodes and edges respectively.",
"First, to preserve the post-comment and inter-comment relationship, we incorporate the comment tree, each comment node of which is connected with the post/comment node it replies to.",
"Then, to facilitate the posts capturing information from related posts in the same topic that proved helpful in Section 1, we connect each post with its topic.",
"The topic node can be regarded as a hub node to integrate and interchange the information.",
"Another way is to connect post nodes in a topic pairwise, but the complexity will be high.",
"Note that the concept topic here is not necessarily provided by the platform, such as the subreddit on Reddit and the hashtag (#) on Weibo.",
"When topics are not provided, algorithms for text-based clustering can be used to construct a topic with related posts (Nematzadeh et al., 2019).",
"In G , each node may represent a topic, a post, or a comment and each edge may represent topic-post, post-comment, or comment-comment connection.",
"We initially represent each node v with an embedding vector x of their text by using the pre-trained language model.",
"The GCN has been proved an efficient neural network that operates on a graph to encode both local graph structure and features of node (Kipf and Welling, 2017).",
"The characteristic of GCN is consistent to our goal that integrates the semantic and structural information.",
"In a GCN, each node is updated according to the aggregated information of its neighbor nodes and itself, so the learned representation can include information from both content and structure.",
"For a node v i V , the update rule in the message passing process is as follows: h ( l +1) i = (cid:88) j N i g (cid:16) h ( l ) i , h ( l ) j (cid:17) + b ( l ) (1) where h ( l ) i is the hidden state of node v i in the l th layer of a GCN and N i is the neighbor set of node v i with itself included.",
"Incoming messages from N i are transformed by the function g and then pass through the activation function (such as ReLU ) to output new representation for each node.",
"b ( l ) is the bias term.",
"Following Kipf and Welling (2017), we use a linear transform function g ( h ( l ) i , h ( l ) j ) = W ( l ) h j , where W ( l ) is a learnable weight matrix.",
"Based on node-wise Equation 1, layer-wise propagation rule can be written as the following form: H ( l +1) = (cid:16) AH ( l ) W ( l ) + B ( l ) (cid:17) (2) where H ( l ) contains all node vectors in the l -th layer and A is the normalized adjacency matrix with inserted self-loops.",
"W ( l ) is the weight matrix and B ( l ) is the broadcast bias term.",
"In TPC-GCN (see Figure 2a), we input the matrix consisting of N d -dimensional embedding vectors H (0) = X RN d to a two-layer GCN to obtain the representation after message passing H (2) .",
"Next, the vector of each post node i and its attached comment nodes are averaged to be the fusion vector f i of the post.",
"Finally, we apply a softmax function to the fusion vectors for the controversy probability of each post.",
"The cross entropy is the loss function: L c = 1 N (cid:88) i ((1 y ci )log(1 p ci )+ y ci log( p ci )) (3) where y ci is a label with 1 representing controversial and 0 representing the non-controversial , p ci is the predicted probability that the i -th post is controversial, and N is the size of training set.",
"The limit of TPC-GCN is that the representation tends to be topic-related as Section 1 said.",
"The limited generalizability of TPC-GCN makes it more suitable for intra-topic detection, instead of inter-topic detection.",
"Intuitively, topic-unrelated features are more effective when testing on the posts from unknown topics (inter-topic detection).",
"However, topic-related features can help when unknown topics are similar to the topics in the training set.",
"Therefore, both of topic-related and topic-unrelated features are useful, but their weights vary from sample to sample.",
"This indicates that the two kinds of features should be disentangled and then dynamically fused.",
"Based on the above analysis, we propose the extension of TPC-GCN, Disentangled TPC-GCN (see Figure 2b), for inter-topic detection.",
"DTPC-GCN consists of two parts: the two-branch multi-task architecture for disentanglement, and attention mechanism for dynamic fusion.",
"Two-branch Multi-task Architecture To obtain the topic-related and topic-unrelated features at the same time, we use two branches of TPC-GCN with multi-task architecture, denoted as R for topic-related branch and U for topic-unrelated one.",
"In both R and U , an auxiliary task, topic classification, is introduced to guide the learning of representation oriented by the topic.",
"For each branch, we first train the first layer of GCN with the topic classification task.",
"The input of the topic classifier is fusion vectors from H (1) which are obtained with the same process of f i in TPC-GCN.",
"The cross entropy is used as the loss function: L t = 1 N (cid:88) k (cid:88) i y tik log( p tik ) (4) where y tik is a label with 1 representing the ground-truth topic and 0 representing the incorrect topic class, p tik is the predicted probability of the i -th post belonging to the k -th topic, and N is the size of training set.",
"The difference between R and U is that we minimize L t in Branch R to obtain topic-distinctive features, but maximize L t in Branch U to obtain topic-confusing features.",
"Then we include the second layer of GCN and train on two tasks, i.e., topic and controversy classification, for each branch individually.",
"Branch U and R are expected to evaluate controversy effectively with different features in terms of the relationship with the topics.",
"Attention Mechanism After the individual training, Branch U and R are expected to capture the topic-related and topic-unrelated features respectively.",
"We further fuse the features from the two branches dynamically.",
"Specifically, we freeze the parameters of U and R , and further train the dynamic fusion component.",
"For the weighted combination of fusion vectors f U and f R from the two branches, we use the attention mechanism as follows: F ( f b ) = v T tanh( WF f b + b F ) , b { U, R } (5) b = exp( F ( f b )) (cid:80) b { U,R } exp( F ( f b )) (6) u = (cid:88) b { U,R } b f b (7) Number Weibo Reddit Topics(Hashtags/Subreddits) 49 6 Controversial Posts 1,992 7,515 Non-controversial Posts 3,684 7,518 All Posts 5,676 15,033 Comments of Controversial Posts 35,632 578,879 Comments of Non-Controversial Posts 34,565 1,461,697 All Comments 70,197 2,040,576 Table 1: Statistics of two datasets.",
"where WF is the weight matrix and b F is the bias term.",
"v T is a transposed weight vector and F ( ) outputs the score of the input vector.",
"The scores of features from Branch U and R are normalized via a softmax function as the branch weight.",
"The weighted sum of the two fusion vectors u is finally used for controversy classification.",
"The loss function is the same as Equation 3. 4 Experiment In this section, we conduct experiments to compare our proposed models and other baseline models.",
"Specifically, we mainly answer the following evaluation questions: EQ1: Are TPC-GCN and DTPC-GCN able to improve the performance of controversy detection?",
"EQ2: How effective are different information in TPC-GCN, including the content of topics, posts, and comments as well as the topic-post-comment structure?",
"EQ3: Can DTPC-GCN learn disentangled features and dynamically fuse them for controversy detection?",
"We perform our experiments on two real-world datasets in different languages.",
"Table 1 shows the statistics of the two datasets.",
"The details are as follows: Reddit Dataset The Reddit dataset released by Hessel and Lee (2019) and Jason Baumgartner of pushshift.io is the only accessible English dataset for controversy detection of social media posts.",
"This dataset contains six subreddits (which can be regarded as over-arching topics): AskMen , AskWomen , Fitness , LifeProTips , personalfinance , and relationships .",
"Each post belongs to a subreddit and the number of attached comments is ensured to be over 30.",
"The tree structure of the comments is also maintained.",
"We use the comment data in the first hour after a post is published.",
"Weibo Dataset We built a Chinese dataset for controversy detection on Weibo 3 in this work.",
"We first manually selected 49 widely discussed, multi-domain topics from July 2017 to August 2019 (see Appendix A).",
"Then, we crawled the posts on those topics and preserved those with at least two comments.",
"Here we rebuilt the comment tree according to the comment time and usernames due to the lack of officially-provided structure.",
"Finally, annotators were asked to read and then annotate the post based on both of the post content and the user stances in the comments/replies.",
"Each post was labeled by two annotators(Cohen's Kappa coefficient = 0.71).",
"When the disagreement occurred between the annotators, the authors discussed and determined the labels.",
"In total, this dataset contains 1,992 controversial posts and 3,684 non-controversial posts, which is in line with the distribution imbalance in the real-world scenario.",
"As far as we know, this is the first released dataset for controversy detection on Chinese social media.",
"We use at most 15 comments of each post due to the computation limit.",
"In the intra-topic experiment: For the Weibo dataset, we randomly divided with a ratio of 4:1:1 in each topic and merged them respectively across all topics.",
"For the Reddit dataset, we apply the data partition provided by the authors.",
"The ratio is 3:1:1.",
"In the inter-topic experiments: For the Weibo and Reddit dataset, we still divided with a ratio of 4:1:1, but on the topic level.",
"In the (D)TPC-GCN model, each node is initialized with its textual content using the pre-trained BERT 4 (BERT-Base Chinese for Weibo and BERT-Base Uncased for Reddit) and the padding size for each is 45.",
"We only fine-tune the last layer, namely layer 11 of BERT for simplicity and then apply a dense layer with a ReLU activation function to reduce the dimensionality of representation from 768 to 300.",
"In TPC-GCN, the sizes of hidden states of the two GCN layers are 100 and 2, respectively, with ReLU for the first GCN layer.",
"To avoid over-fitting, a dropout layer is added between the two layers with a rate of 0.35.",
"We apply a softmax function to the fusion vector for obtaining the controversy probability.",
"In DTPC-GCN, the size of 3 http://mcg.ict.ac.cn/ controversy-detection-dataset.html 4 https://github.com/google-research/ bert hidden states of the first and second GCN layers in each branch are 32 and 16.",
"The dropout rate between two GCN layers in each branch is set to 0.4.",
"The batch size in our (D)TPC-GCN model is 1 (1 TPC graph), and 128 (posts and attached replies) in our PC-GCN model and baselines.",
"The optimizer is BertAdam 5 in all BERT-based models and Adam (Kingma and Ba, 2014) in the other semantic models.",
"The learning rate is 1e-4 and the total epoch is 100.",
"We report the best model according to the performance on the validation set.",
"In those semantic models that are not based on BERT, we use two publicly-available big-scale word embedding files to obtain the model input, sgns.weibo.bigram-char 6 for Weibo and glove.42B.300d 7 for Reddit.",
"To validate the effectiveness of our methods, we implemented several representative methods including content-based, structure-based and fusion methods as baselines.",
"We implement mainstream text classification models including TextCNN (Kim, 2014), BiLSTM-Att (bi-directional LSTM with attention) BiLSTM (Graves and Schmidhuber, 2005; Bah-danau et al., 2015), BiGRU-Att (bi-directional GRU with attention) (Cho et al., 2014), BERT (De-vlin et al., 2019) (only fine-tune the last layer for simplicity).",
"For a fair comparison, we concatenate the post and its attached comments together as the input, instead of feeding the post only.",
"Structure-based Methods Considering that structure-based features of the post and its comment tree are rare and nonsystematic in previous works, we integrate the plausible features in (Coletto et al., 2017) and (Hessel and Lee, 2019).",
"As the latter paper does, we feed them into a series of classifiers and choose a best model for classification.",
"We name the method SFC .",
"For a post-comment graph, the feature set contains the average depth (average length of root-to-leaf paths), the maximum relative degree (the largest node degree divided by the degree of the root), CRATE features (the logged reply time between the post and comments, or over pairs of comments), 5 https://pypi.org/project/ pytorch-pretrained-bert/ 6 https://github.com/Embedding/ Chinese-Word-Vectors 7 https://nlp.stanford.edu/projects/ glove/ Method Weibo Dataset Reddit Dataset Avg.",
"and C-TREE features (statistics in a comment tree, such as maximum depth/total comment ratio).",
"Fusion Method The compared fusion method from (Hessel and Lee, 2019) aims to identify the controversial posts with semantic and structure information.",
"They extract text features of topics, posts, and comments by BERT and structural feature including the CRATE and C-TREE features mentioned above.",
"In addition, publish time features are also exploited.",
"To answer EQ1 , we compare the performance of proposed (D)TPC-GCN with mentioned baselines on the two datasets.",
"The evaluation metrics include the macro average precision (Avg. P), macro average recall (Avg. R), macro average F1 score (Avg. F1), and accuracy (Acc.).",
"Table 2 and 3 show the performance of all compared methods for intra-topic detection and inter-topic detection respectively.",
"In the intra-topic experiments, we can see that 1) TPC-GCN outperforms all compared methods on the two datasets.",
"This indicates that our model can effectively detect controversy with a significant generalizability on different datasets.",
"2) The structure-based model, SFC, reports the low scores on the two datasets, indicating that the statistical structural information is insufficient to timely identify the controversy.",
"3) The fusion models outperform or are comparable to the other baselines, which proves that information fusion of content and structure is necessary to improve the performance.",
"In the inter-topic experiments, we can see that 1) DTPC-GCN outperforms all baselines by 6.4% of F1 score at least, which validates that DTPC-GCN can detect controversy on unseen or dissimilar topics.",
"2) DTPC-GCN outperforms TPC-GCN by 3.74% on Weibo and 4.00% on Reddit.",
"This indicates that feature disentanglement and dynamic fusion can significantly improve the performance of inter-topic controversy detection.",
"To answer EQ2 and part of EQ3 , we also evaluate several internal models, i.e., the simplified variations of (D)TPC-GCN by removing some components or masking some representations.",
"By the ablation study, we aim to investigate the impact of content and structural information in TPC-GCN and topic-related and topic-unrelated information in DTPC-GCN.",
"Ablation Study of TPC-GCN We delete certain type of nodes (and the edges connect to them) to investigate their overall impact and mask the content by randomizing the initial representation to investigate the impact of content.",
"Specifically, we investigate on the following simplified models of TPC-GCN: PC-GCN / TP-GCN : discard the topic / comment nodes.",
"(RT)PC-GCN / T(RP)C-GCN / TP(RC)-GCN : randomly initialize the representation of topic / post / comment nodes.",
"From Table 4, we have the following observations: 1) TPC-GCN outperforms all simplified models, indicating that the necessity of structure and content from all types of nodes.",
"2) PC-GCN uses no extra information (the information of other posts in the same topic), the performance is still better than the baselines (Table 2 and 4), showing the effectiveness of our methods.",
"3) The models deleting comment information, i.e., TP-GCN and TP(RC)-GCN, experience a dramatic drop in performance, which shows the comment information is of the most importance.",
"4) The effect of structural information varies in the different situations.",
"Without the contents, the comment structure can individually work (TP(RC)-GCN > TP-GCN), while for topics, the structure has to collaborate with the contents ((RT)PC-GCN < PC-GCN on the Weibo dataset).",
"We focus on the roles of the U (topic-unrelated) branch and R (topic-related) branch:",
"Table 5 shows that both of the two branches can identify controversial posts well, but their performances are worse than the fusion model.",
"Specifically, the U branch performs slightly better than R , indicating the topic-unrelated features are more suitable for inter-topic detection.",
"We infer that the two branches can learn good but different representation under the guide of the auxiliary task.",
"We conduct a case study to further answer EQ3 from the perspective of samples.",
"We compare the attention weight of the U and R branch in DTPC-GCN and exhibit some examples where the final decisions lean on one of the two branches.",
"Figure 3 shows two examples in the testing set of the Weibo dataset.",
"The DTPC-GCN rely more on the topic-unrelated features from Branch U when classifying Post 1 ( 0 . 874 > 0 . 126 ), while more on the topic-related features from Branch R when classifying Post 2 ( 0 . 217 < 0 . 783 ).",
"The topic of Post 1 , Cancel the Driving License , is weakly relevant to topics in training set, and the comments mostly use topic-unspecific words such as simple support and good proposal .",
"Thus, the topic-unrelated features are more beneficial for judging.",
"In contrast, Post 2 discusses the death penalty for women and children traffickers, relevant to one of the topics in the training set, Improve Sentencing Standards for Sexually Assault on Children .",
"Further, both of the two topics are full of comments on death penalty .",
"Exploiting more of the topic-related features is reasonable for the final decision.",
"By conducting the error analysis on 186 misclas-sified samples in the Weibo dataset, we find three main types of samples that lead to the misclassi-fication: 1) 22.6% of the wrong samples are with too much noise in the comments, including unrelated and neutral comments.",
"2) 16.1% are with a very deep tree structure.",
"This kind of structure is helpful for controversy detection (Hessel and Lee, 2019), but the ability of GCN to obtain information from this kind of structure is limited.",
"3) 10.2% are with obscure and complex statements.",
"These wrong cases indicate that better handling the noisy data, learning more deep structural features, and mining the semantic more deeply have the potential to improve the performance.",
"In this paper, we propose a novel method TPC-GCN to integrate the information from the graph structure and content of topics, posts, and comments for post-level controversy detection on social media.",
"Unlike the existing works, we exploit the information from related posts in the same topic and the reply structure for more effective detection.",
"To improve the performance of our model for inter-topic detection, we propose an extension of TPC-GCN named DTPC-GCN, to disentangle the topic-related and topic-unrelated features and then dynamically fuse them.",
"Extensive experiments conducted on two datasets demonstrate that our proposed models outperform the compared methods and prove that our models can integrate both semantic and structural information with significant genaralizablity.",
"The authors thank Peng Qi, Mingyan Lu, Guang Yang, and Jiachen Wang for helpful discussion.",
"This work is supported by the National Nature Science Foundation of China (U1703261)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"Recent neural text-to-SQL models can effectively translate natural language questions to corresponding SQL queries on unseen databases.",
"Working mostly on the Spider dataset, researchers have proposed increasingly sophisticated solutions to the problem.",
"Contrary to this trend, in this paper we focus on simplifications.",
"We begin by building DuoRAT, a re-implementation of the state-of-the-art RAT-SQL model that unlike RAT-SQL is using only relation-aware or vanilla transformers as the building blocks.",
"We perform several ablation experiments using DuoRAT as the baseline model.",
"Our experiments confirm the usefulness of some techniques and point out the redundancy of others, including structural SQL features and features that link the question with the schema 1 .",
"Language user interfaces to databases allow nonspecialists to retrieve and process information that might otherwise not be easily available to them.",
"Much of the recent research in this area has focused on neural models that can generalize to new relational databases without any human intervention.",
"Given a relational database schema (and often also content), such models translate the user's question directly into an SQL query (Zhong et al., 2017; Yu et al., 2018a; Bogin et al., 2019).",
"Such cross-database text-to-SQL research was spurred by the introduction of large datasets such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018b) that feature utterance-query pairs for hundreds or even thousands of databases.",
"State-of-the-art text-to-SQL models employ many sophisticated techniques.",
"These include, but are not limited to, grammar-constrained models Equal contribution, order was determined by a quantum random number draw.",
"and recurrent neural networks with parent feeding (Yin and Neubig, 2018), intermediate meaning representations (Guo et al., 2019; Suhr et al., 2020), relation-aware attention (Wang et al., 2020), schema linking and table joining heuristics (Guo et al., 2019), slot filling (Choi et al., 2020) and re-ranking (Kelkar et al., 2020) models.",
"The high complexity of these models raises the barrier of entry and can slow down text-to-SQL research.",
"In this work, we attempt to distill the essence of high-performing text-to-SQL systems.",
"We start with a transformer-only reimplementation of the state-of-the-art RAT-SQL model (Wang et al., 2020).",
"Importantly, our resulting DuoRAT model trains three times faster than RAT-SQL.",
"We then systematically study how DuoRAT can be simplified without losing performance.",
"Our ablation study confirms the usefulness of many but not all techniques employed in RAT-SQL.",
"For example, we show that the benefits of explicit matching of question spans with the column or table names ( name-based schema linking , NBSL) become marginal when a pretrained transformer (De-vlin et al., 2018) is used to jointly encode the question and the schema.",
"By contrast, we confirm the benefit of using a grammar to constrain the inference to only produce well-formed queries.",
"These and other findings of our work bring much-needed insight of what enables higher performance in mod-ern text-to-SQL models.",
"Our base model, DuoRAT, is a reimplementation of RAT-SQL (Wang et al., 2020).",
"It is an encoder-decoder model with attention and a pointer-network copy mechanism, Fig. 1. Contrary to RAT-SQL, both the encoder and the decoder are relation-aware transformers (Shaw et al., 2018).",
"The input is modelled as a labelled directed graph, where the nodes are the input tokens and the edges are the so-called relations , see below.",
"Relation-Aware Attention Compared to vanilla self-attention, relation-aware self-attention takes two additional tensor inputs, key relations r ( K ) ij and value relations r ( V ) ij , that amplify or diminish contributions in the scaled dot-product attention for each of the H attention heads: e ( h ) ij = x i W ( h ) Q (cid:16) x j W ( h ) K + r ( K ) ij (cid:17) (cid:124) (cid:112) d z / H (1) ( h ) ij = exp e ( h ) ij nk = 1 exp e ( h ) ik (2) z ( h ) i = n j = 1 ( h ) ij (cid:16) x j W ( h ) V + r ( V ) ij (cid:17) , (3) where x i R d x is the i -th element of the input sequence, ( h ) ij is the attention weight coefficient for the h -th head, and z ( h ) i R d z / H is the i -th output element.",
"The indices i and j run from 1 to n , where n is the length of the sequence.",
"W ( h ) Q , W ( h ) K , and W ( h ) V R d x d z / H are trainable weight matrices.",
"r ( K ) ij R d z / H and r ( V ) ij R d z / H represent a directed labelled edge pointing from the i -th input x i to the j -th input x j .",
"Following Shaw et al. (2018), we set r ( K ) ij = r ( V ) ij = r ij .",
"The relations r ij are shared across all layers.",
"Let R be the total number of relational edge labels.",
"If the relation s { 1 ,..., R } exists between the i -th and j -th input, then we assign the s -th learned embedding r ( s ) ij to r ij .",
"Otherwise, we use padding.",
"Encoder The DuoRAT encoder is divided into two stages, a pretrained relation-unaware transformer stage followed by a relation-aware transformer stage that is trained from scratch.",
"The first stage is initialized with BERT weights (Devlin et al., 2018) and is fed embeddings of the question tokens, the table name tokens, the column name tokens, and one token for each column data type.",
"We add [CLS] tokens between segments, cf.",
"Fig. 1 for the input layout.",
"The second stage has two inputs: an input sequence and the input relations corresponding to graph nodes and labelled edges, respectively.",
"The input sequence is comprised of the BERT outputs for all question token positions and the [CLS] token position outputs for each table and each column.",
"We use relational edge labels similar to those introduced by RAT-SQL.",
"The labels are divided into three groups;",
"(i) schema-linking relations,",
"(ii) table-column relations, and",
"(iii) foreign-key relations.",
"(i) Schema-linking relations provide explicit alignment between the question and the schema.",
"We distinguish between name-based schema linking (NBSL) and content-based schema linking (CBSL), where the former uses the names of tables and columns only and the latter uses the database content.",
"An example for NBSL is when the question references a table by name, like singer in Fig. 1. CBSL identifies when the question references a value in a database column, e.g. the word France in Fig. 1. We use a common schema-linking heuristic where question n-grams are compared at the character level with names of tables and columns for NBSL and with the contents of column cells for CBSL.",
"(ii) The table-column relations describe which columns belong to which tables, and which columns occur in the same table.",
"Finally,",
"(iii) the foreign-key relations indicate the foreign key constraints between columns.",
"See Appendix A for a complete list of the encoder relations.",
"(Yin and Neubig, 2018) to relation-aware transformers.",
"Like in the original framework, the decoder is restricted to generate only those sequences of grammar actions that encode a valid SQL abstract syntax tree (AST), see Appendix B. We consider two output grammars, one for complete SQL and another for SQL with underspecified FROM clause (SQLUF ) (Suhr et al., 2020).",
"In a SQLUF query, the FROM clause is replaced by the UF clause that contains only the tables that were not mentioned in other clauses of the original SQL query.",
"After decoding a SQLUF query, we recover the FROM clause by adding tables from other clauses and joining them using the foreign-key relations.",
"RAT-SQL and the TRANX framework use a custom parent-feeding LSTM decoder where the LSTM is fed also its own state from a previous step at which the constructor of the current action's parent AST node was generated.",
"By contrast, in DuoRAT's relation-aware transformer decoder, we experiment with relations that are derived from the structure of the SQL program code, see Appendix A for a list.",
"The relations can bias the transformer decoder towards attending AST parent or sibling nodes, allowing for the model to get a sense of the AST's structure.",
"However, it is unclear whether or not this is necessary in a model with self-attention.",
"The decoder is coupled to the encoder via a relation-aware memory attention mechanism.",
"Here we use relations to indicate which tokens from the input were copied to the output, that is, either question tokens, tables, or columns, depending on the type of the literal that was produced.",
"For most of our experiments we use the Spider dataset (Yu et al., 2018b) and evaluate the predicted SQL with the exact-match (EM) accuracy from the official Spider evaluation script.",
"The Spider training set contains 8,659 questions for 146 databases.",
"The Spider development set contains 1,034 questions for 20 databases.",
"We exclude the baseball1 questions from the training data because the schema of this database is too large.",
"To compare DuoRAT to the models in the literature we evaluate it on the original development set as released on January 8, 2019.",
"In all other experiments we use the corrected development set that was released on June 7, 2020.",
"We also test our Spider-trained models on several System EM (dev.) RYANSQL (Choi et al., 2020) 70 .",
"earlier single-database text-to-SQL datasets, see Section 3.4 for more details on that and Appendix C for details on the training procedure.",
"Table 1 compares DuoRAT's performance on the Spider development set to that of other state-of-the-art models 2 .",
"DuoRAT performs similarly to its close relative RAT-SQL and outperforms other recently proposed models.",
"Importantly, DuoRAT training takes roughly two days compared to six days for RAT-SQL.",
"We associate the difference in speed with replacing RAT-SQL's LSTM-with-parent-feeding decoder with a transformer.",
"Schema Linking Prior work (Wang et al., 2020; Guo et al., 2019) attributes a high value to schema linking, that is, to the engineering of features that ground the user utterance in the database domain.",
"However, this insight rests entirely on experiments without BERT.",
"We find that, for our BERT-based DuoRAT model, name-based schema linking (NBSL) can be disabled with a negligible loss in performance (see Table 2) while content-based schema linking (CBSL) can not.",
"The result suggests that a BERT encoder fine-tunes to perform computation that makes heuristic NBSL redundant.",
"To gain further understanding of whether or how BERT does this, we conduct an experiment in which the inputs to BERT are divided into two logical segments: the question and the schema.",
"We shape the attention mask such that the question segment attends to the schema or the schema attends to the question, or both, or neither.",
"The results are shown in Table 2. We observe that, for the best performance, BERT should 2 Results taken from the Spider leaderboard at https:// yale-lily.github.io/spider on October 19, 2020.",
"be jointly embedding the question and the schema.",
"We can neither embed the question separately from the schema nor the schema separately from the question without substantial performance losses.",
"Interestingly, once we cut all the attention connections between the question and the schema, explicit NBSL becomes essential.",
"This confirms our hypothesis that joint BERT-based encoding of the question and the schema is the cause of the low importance of NBSL in our model.",
"Schema Structure Representation Various ways of encoding which columns belong to which table have been explored: Suhr et al. (2020) orders the schema elements such that each table name is followed by the names of the columns of that table, RAT-SQL (Wang et al., 2020) represents schema structure as table-column relations in the RAT encoder.",
"Our experiments show that encoding the schema structure via encoder relations gives the best performance (first row of Table 4), and encoding it in the order of its elements (third row) is better than not encoding it at all (second row).",
"Additional results can be found in Appendix D. 3.3 Decoder Ablations Decoder Relations And Grammar-Based Constraining Table 3 shows results of ablation stud-Model Variant EM (dev.) [column][table] + relations 69 .",
"ies in which",
"(i) different kinds of decoder relations were removed, in particular those that provide program structure information, and",
"(ii) grammar-based constraining was deactivated during training and/or inference.",
"The experiments provide the following insights:",
"(i) A vanilla transformer decoder can be used without loss of performance.",
"The relations that provide information about the AST and about which literals were copied from the input are not useful.",
"We speculate that AST information can be more useful in deeply nested SQL expressions of which Spider contains only few.",
"(ii) By contrast, grammar constraining at inference leads to significant performance improvements.",
"Notably, training tends to be less stable when not using grammar constraints.",
"The usefulness of grammar constraining can be explained by the fact that it reduces the output space and makes the decoder more data-efficient.",
"Output Format In this section, we examine the performance on the Spider dataset when outputting complete SQL and when outputting simplified SQL with underspecified FROM clause, SQLUF .",
"The results are reported in Table 6. Our first insight is that DuoRAT performance with and without SQLUF is almost the same.",
"Our second insight is that when the encoder does not have access to information about the foreign keys, SQLUF brings a significant improvement.",
"The best result is still achieved with a model that uses the foreign-key input relations.",
"A known issue of the Spider dataset is that the question wordings are unnaturally close to the respective queries (Suhr et al., 2020).",
"To complement our studies on Spider, we perform additional experiments on single-database text-to-SQL datasets that are devoid of this issue.",
"Suhr et al. (2020) propose a demanding cross-domain generalization evaluation whereby models are trained on Spider and Dataset DuoRAT + SQLUF w/o CBSL w/o SQLUF GeoQuery 54 .",
"tested on single-database datasets by comparing the execution results of the predicted and the gold queries.",
"We follow this methodology and filter the datasets to only use those question-query pairs for which execution accuracy evaluation is appropriate (see Suhr et al. (2020) for filtering details).",
"To focus on evaluating the query structure, we replace predicted string literals with the most similar ones from the gold query (details of this procedure can be found in Appendix E).",
"Of the 8 datasets that Suhr et al. (2020) consider we exclude ATIS, Advising, and Scholar for being too different from Spider and Restaurants for having just 27 examples after filtering.",
"What remains are the SQL version of the GeoQuery dataset (Zelle and Mooney, 1996) as well as the small Academic, IMDB, and Yelp datasets (Finegan-Dollak et al., 2018).",
"After filtering, these datasets are left with 532, 180, 107 and 54 examples, respectively.",
"Note that these single-database datasets are partially contained in Spider.",
"To avoid testing on training data, we train new models only on the about 7,000 examples produced by the Spider data collection effort.",
"In this round of analysis we focus on the impact of CBSL and the underspecified FROM clause (SQLUF ) technique (Suhr et al., 2020).",
"We expect both methods to be especially useful for out-of-distribution generalization, despite the limited importance that Spider evaluation attributes to them.",
"The results in Table 5 show that, in line with our intuition, CBSL and SQLUF bring performance gains on 2 and 3 out of 4 datasets, respectively.",
"To enable comparison with results by Suhr et al. (2020), we also report the results without literal replacement in Table 7. DuoRAT performs consistently better than the model by Suhr et al. (2020).",
"Our investigations have revealed several possible simplifications of relation-aware text-to-SQL transformer models.",
"In particular, we have shown that a transformer decoder with vanilla selfand memory-attention is sufficient, and that heuristic schema linking based on table and/or column names brings only a marginal benefit.",
"To the contrary, we confirm the importance of grammar-constrained decoding, relational schema representations, content-based schema linking.",
"Looking forward, we believe that content-based schema-linking will remain important, while the impact of name-based schema linking will further decrease as the language models get bigger and absorb more data.",
"This prediction is based on the fact that the mapping from an entity to the entity type that name-based schema linking effectively performs can be highly domainand schema-specific.",
"Last but not least, we have shown that predicting a more com-pact SQL version with an underspecified FROM clause improves the model's out-of-distribution performance, despite bearing little influence on the model's performance on Spider.",
"In future work, we will combine the successful simplifications from this paper to build the simplest yet high-performing text-to-SQL model.",
"One promising direction for further simplification is to use a pretrained encoder-decoder pair as proposed in Raffel et al. (2020) and Lewis et al. (2019)."
] | [
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"result",
"method",
"method",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"method",
"method",
"abstain",
"result",
"method",
"abstain"
] |
[
"The wanton spread of hate speech on the internet brings great harm to society and families.",
"It is urgent to establish and improve automatic detection and active avoidance mechanisms for hate speech.",
"While there exist methods for hate speech detection, they stereotype words and hence suffer from inherently biased training.",
"In other words, getting more affective features from other affective resources will significantly affect the performance of hate speech detection.",
"In this paper, we propose a hate speech detection framework based on sentiment knowledge sharing.",
"While extracting the affective features of the target sentence itself, we make better use of the sentiment features from external resources, and finally fuse features from different feature extraction units to detect hate speech.",
"Experimental results on two public datasets demonstrate the effectiveness of our model.",
"With the prevalence of mobile Internet and social media, phenomena such as the malicious spread of hate speech have gradually become widespread.",
"This often has incalculable consequences and has become a serious social problem.",
"How to quickly and accurately detect hate speech automatically, and then better intervene to prevent it has become one of the hot research issues in the field of natural language processing.",
"The automatic detection of hate speech can prevent the viral spread of hate speech, thereby reducing the malicious spread of cyberbullying and harmful information.",
"In the field of public opinion analysis, monitoring and intervention, hate speech detection has extensive value in application.",
"challenging due to the inherent complexity of the natural language constructs.",
"Most of the existing works revolves either around rules (Krause and Grassegger, 2016) or manual feature extraction (Gitari et al., 2015).",
"Rule-based methods do not involve learning and typically rely on a pre-compiled list or dictionary of subjectivity clues (Haralam-bous and Lenca, 2014).",
"Chen et al. (2012) proposed a variety of linguistic rules to determine whether a sentence constitutes hate speech or not.",
"For example, if a second-person pronoun and a derogatory word appear at the same time, such as <you, gay>, the sentence is judged to be insulting.",
"This type of method not only requires manual formulation of rules, but also requires dictionaries of derogatory words.",
"There have also been many attempts to detect hate speech using traditional machine learning methods.",
"Mehdad and Tetreault (2016) extracted the n-gram, character-level and sentiment features of text and used support vector machines (SVM) to detect hate speech.",
"However, artificial features can only reflect the shallow features of text and cannot understand content from the deep semantic features.",
"Deep learning methods have been widely used in the field of hate speech detection and have achieved good performance (Badjatiya et al., 2019 Qian et al., 2018) in recent years.",
"Wang (2018) compared the performance of various neural network models in detecting hate speech and used visualization techniques to give the models better interpretability.",
"The semantics of hate speech contains a strong negative sentiment tendency.",
"The deep learning methods of predecessors often only used pre-trained models or deeper networks to obtain semantic features, ignoring the sentiment features of the target sentences and external sentiment resources, which also makes the performance of neural networks unsatisfactory in hate speech detection.",
"To overcome the weaknesses of previous works, 7159 we propose a hate speech detection framework based on sentiment knowledge sharing (SKS) 1 .",
"Our intuition is that most hate speech contains words with strong negative emotions, which are usually the most direct clues to hate speech.",
"Meanwhile, as claimed by Davidson et al. (2017), lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech.",
"Therefore, we hope to make better use of external sentiment resources so that the model can learn sentiment features and share them, which will greatly affect the performance of hate speech detection.",
"In addition, inspired by the recent MoE layer (Shazeer et al., 2017) and the Multi-gate Mixture-of-Experts (MMoE) model (Ma et al., 2018), we use multiple feature extraction units and use a gated attention mechanism to fuse features.",
"The main contributions of this work are summarised as follows: (1) In view of the lack of the use of sentiment information in previous works, we not only integrate the derogatory words of target sentences into the neural network, but also use multi-task learning to make the model learn and share external sentiment knowledge.",
"(2) In order to better capture shared task or task-specific information, we propose a new framework which uses multiple feature extraction units where each extraction unit uses the multi-head attention mechanism and a feedforward neural network to extract features, and finally uses gated attention fuse features.",
"(3) Experimental results on the SemEval-2019 task-5 and Davidson datasets demonstrate that our method achieves state-of-the-art performance compared with strong baselines, and then further detailed examples verify the effectiveness of our presented model for hate speech detection.",
"Hate speech is very dependent on the nuance of language.",
"Even if it is manually distinguished whether certain sentences contain hate semantics, consensus is rare (Waseem, 2016).",
"Recently, automatically detecting hate speech has been widely studied by researchers.",
"In this section, we will review related works on traditional machine learning-based methods, deep learning-based methods, and multi-task learning-based methods of hate speech detection.",
"1 Codeisavailableathttps://github.com/1783696285/SKS.",
"Machine learning-based methods based on feature engineering are widely used in the field of hate speech detection.",
"Malmasi and Zampieri (2018) provided empirical evidence that n-gram features and sentiment features can be successfully applied to the task of hate speech detection.",
"Rodrguez et al. (2019) constructed a dataset of hate speech from Facebook, and proposed a rich set of sentiment features, including negative sentiment words and negative sentiment symbols, to detect hate speech.",
"Del Vigna12 et al. (2017) used the sentimental value of words as the main feature to measure whether a sentence constitutes hate speech.",
"Gitari et al. (2015) designed several sentiment features and achieved good performance in experiments.",
"Previous studies have shown that sentiment features play an important role in hate speech detection.",
"Recently, deep learning-based methods have garnered considerable success in hate speech detection.",
"Zhang et al. (2018) fed input into a convolutional neural network (CNN) and a gated recurrent unit (GRU) to learn higher-level features.",
"Kshir-sagar et al. (2018) proposed a transformed word embedding model (TWEM), which had a simple structure but can achieve better performance than many complex models.",
"Badjatiya et al. (2019) found that due to the limitation of the training set, the deep learning model would have bias and he designed and implemented a bias removal strategy to detect hate speech.",
"Tekiroglu et al. (2020) constructed a large-scale dataset based on hate speech and its responses and used the pre-trained language model, GPT-2, to detect hate speech.",
"Obviously, deep learning models can extract the latent semantic features of text, which can provide the most direct clues for detecting hate speech.",
"Multi-task learning can learn multiple related tasks and share knowledge at the same time.",
"In recent years, there have been some achievements in the field of hate speech detection.",
"Kapil and Ekbal (2020) proposed a deep multi-task learning (MTL) framework to leverage useful information from multiple related classification tasks in order to improve the performance of hate speech detection.",
"Liu et al. (2019) introduced a novel formulation of a hate speech type identification problem in the setting of multi-task learning through their proposed fuzzy ensemble approach.",
"Ousidhoum et al. (2019) presented a new multilingual multi-aspect hate speech analysis dataset and used 7160 Multi-Head Attention Feed Forward Max Pooling Avg Pooling Feed Forward Multi-Head Attention Feed Forward Max Pooling Avg Pooling Feed Forward Multi-Head Attention Feed Forward Max Pooling Avg Pooling Feed Forward (cid:17)(cid:17)(cid:17) Word Embedding Category Embedding Gate1 Feed Forward Feed Forward Hate Speech Task Sentiment Task Gate2 Figure 1: The overall framework of our proposed Hate Speech Detection based on Sentiment Knowledge Shar-ing(SKS).",
"it to test the current state-of-the-art multilingual multitask learning approaches.",
"Ousidhoum et al. (2019) proposed BERT-based multi-task learning for offensive language detection.",
"Some studies have shown that multi-task learning can improve the performance and generalization ability of models in hate speech detection by using the correlation between the task of sentiment analysis and hate speech detection.",
"In this section we introduce our model, SKS.",
"Our model is able to improve hate speech detection by considering both target sentence sentiment and external sentiment knowledge.",
"The overall architecture of SKS is shown in Figure",
"1. The framework consists mainly of three layers: 1) Input layer.",
"In order to better obtain the sentiment features of the sentence itself, we use a derogatory words dictionary to judge whether each word is a hate word, and then append the category information to the word embedding.",
"2) Sentiment knowledge sharing layer.",
"Since sentiment analysis and hate speech detection are highly correlated, we use the multi-task learning framework to model task relationships and learn task-specific features to take advantage of shared sentiment knowledge.",
"We use multiple feature extraction units composed of a multi-head attention layer and a feedforward neural network.",
"3) Gated attention layer.",
"A gated attention mechanism is used to output the probability that the feature extraction unit is selected.",
"Finally, a feedforward neural network is used to detect hate speech.",
"Hate speech often contains obvious negative sentiment words because of the strong negative sentiment.",
"exp1: Go fucking kill yourself and die already useless ugly pile of shit scumbag.",
"The words fucking, ugly, and shit scum-bag in exp1 are all obviously insulting and offensive, and they contain strong negative sentiment.",
"Obviously, whether the word in the target sentence is a derogatory word is the most direct clue to judge hate speech.",
"Therefore, paying attention to capturing derogatory words in a sentence can help us improve hate speech detection.",
"Word Embedding.",
"Word Embedding is based on distributed assumptions and mapped words into a high dimension feature space and maintaining the semantic information.",
"For each target sentence S = { w 1 , w 2 , , w N } , we transform each token w i 7161 into a real valued vector x i using word embedding, where x i R d is the word vector, d is dimensions of word vectors.",
"Category Embedding.",
"Our work is strongly based on the intuition that hate speech arises from derogatory words.",
"In other words, some specific words that are extremely insulting will make a greater contribution to judging hate speech.",
"Therefore, we have established a derogatory word dictionary.",
"The vocabulary comes from Wikipedia 2 and another website 3 , including Hate Speech, Disability, LGBT, Ethnic, and Religious, with 5 categories.",
"Since the vocabulary contains 2 or 3 word phrases, when judging whether it is a dirty word, we use n-gram, n [1,2,3].",
"The derogatory word dictionary is used to divide tweet into two categories, either containing derogatory words or not containing derogatory words,and then assign the two categories to each word in the tweet.",
"The category of each word is initialized randomly as vector C = ( c 1 , c 2 , , c n ) , c i R d .",
"Since the common word embedding representations exhibit a linear structure, that makes it possible to meaningfully combine words by an element-wise addition of their vector representations.",
"In order to better take advantage of information within derogatory words, we append the category representation to each word embedding.",
"The embedding of a word x i for a category embedding c i is x i = x i c i , where is the vector concatenation operation.",
"Due to the influence of different countries, regions, religions and cultures, insulting meanings in many languages are hidden in the underlying semantics, rather than just reflected in sentiment words.",
"exp3: i'm so fucking ready!",
"There are no obvious negative sentiment words in exp2, but the sentence constitutes hate speech.",
"Although pig is a neutral word, most people equate the word pig with stupid and clumsy.",
"Comparing Jews and pig is obviously an insult to Jews.",
"Latent semantics and common sense of sentiment are the keys to correctly judging the sentence.",
"Exp3 contains the word fucking with a strong negative sentiment.",
"This word often appears in hate speech.",
"However, in this sentence, 2 https://www.wikipedia.org/ 3 https://www.noswearing.com/ fucking does not specifically refer to a person, but is just an adverb of degree, which strengthens the tone.",
"It is not hate speech.",
"It can be seen from the above example that although hate speech often contains negative sentiment words, only using the sentiment information of the target sentence itself to detect hate speech often makes it difficult to obtain satisfactory performance.",
"Deep learning methods require a large amount of labelled data for supervised learning, which needs more human effort and prior knowledge of this particular task.",
"High-quality annotation data is scarce in hate speech detection, which makes the task stereotype words and hence suffer from inherently biased training.",
"Sentiment analysis research has been carried out for many years, and there are abundant high-quality labelled datasets.",
"There is a high degree of correlation between two tasks, and multi-task learning can use the correlation between multiple tasks to improve the performance and generalization ability of the model in each task.",
"Therefore, we adopt a multi-task learning method for sentiment knowledge sharing, so as to better extract sentiment features and apply them to hate speech detection.",
"The framework of multi-task learning widely uses a shared-bottom structure, and different tasks share the bottom hidden layer.",
"This structure can essentially reduce the risk of overfitting, but the effect may be affected by task differences and data distribution.",
"We adopt the framework structure of Mix-of-Expert (MoE).",
"The MoE layer has multiple identical feature extraction units, which share the output of the previous layer as input and outputs to a successive layer.",
"Then, the whole model is trained in an end-to-end way.",
"Our feature extraction units layer is composed of a multi-head attention layer and two feed forward neural networks.",
"Multi-head Attention Layer.",
"The self-attention mechanism connects any two words in a sentence by calculating the semantic similarity and semantic features of each word in the sentence and other words so as to better obtain the long-distance dependency.",
"The multi-head self-attention proposed by Vaswani et al. (2017) is used in this section.",
"For a given query Q R ( n 1 d 1 ) , key K R ( n 1 d 1 ) , value V R ( n 1 d 1 ) , we use the dot product to calculate attention parameters.",
"The formula is as follows: Attention(Q , K , V) = softmax ( QKT d 1 ) V (1) 7162 where d 1 is the number of hidden layer unites.",
"The multi-head attention mechanism maps the input vector X to query, key, and value using linear changes.",
"In our task, key=value.",
"Then, the model learns the semantic features between words through the l-time attention.",
"For the i-th attention head, let the parameter matrix W Qi R n 1 d 1 l , W Ki R n 1 d 1 l , W Vi R n 1 d 1 l , we use the dot product to calculate the semantic features between them: M i = Attention(QW Qi , KW Ki , VW Vi ) (2) The vector representation obtained by the multihead attention mechanism is concatenated to obtain the final feature representation: H s = concat ( M 1 , M 2 , . . . , M l ) W o (3) Pooling Layer.",
"Shen et al. (2018) used maximum pooling and average pooling to fuse features.",
"Experimental results showed that the performance of this method is significantly better than using a single pooling strategy.",
"Therefore, we use maximum pooling and average pooling at the same time.",
"The formula is as follows: P m = Pooling _ max (H s ) (4) P a = Pooling _ average (H s ) (5) P s = concat (P m , P a ) (6) 3.3 Gated Attention Gated attention can learn to select a subset of the feature extraction units to use, conditioned on the input.",
"For different tasks, the weight selection of the model is different, so each task has a Gate.",
"The output of a specific gate k represents the probability of a different feature extract unit being selected, and multiple units are weighted and summed to obtain the final representation of the sentence, which will be passed into the exclusive layer of the task.",
"Our gating unit has the same structure as the feature extraction unit.",
"The formula is as follows: g k ( x ) = softmax ( W gn gate ( x )) (7) f k ( x ) = n i =1 g k ( x ) i f i ( x ) (8) Dataset total Classes SE 11,971 hate (5,035) non-hate (6,936) DV 24,783 hate (1,430) non-hate (23,353) SA 31,962 negative(2,242) positive(29,720) Table 1: Statistics of datasets used in the experiment.",
"where k is the number of tasks and h is the hidden layer representation.",
"For training process, the whole parameters can be optimized from our networks.",
"Then, cross entropy is applied with L2 regularization as the loss function, which is defined as: loss = i j y ji log y ji + 2 (10) where i is the index of sentences, j is the index of class, is the L 2 regularization term, is the parameter set.",
"In this section, we first introduce the datasets and evaluation metrics.",
"Then we compare the performance of our model with several strong baselines.",
"Finally, a detailed analysis is given.",
"We try to explore whether sharing sentiment knowledge can improve the performance of hate speech detection.",
"Therefore, two public hate speech datasets and one sentiment dataset is used in our experiment.",
"The details of the datasets are shown in Table",
"1. SemEval2019 task5 (SE) (Basile et al., 2019).",
"The SE comes from SemEval 2019 task 5, and subtask A is hate speech detection.",
"The dataset is divided into three subsets.",
"The training contains 9000 cases, the validation contains 1000 cases, and the test contains 2971 cases.",
"Davidson dataset (DV) (Davidson et al., 2017).",
"The DV dataset was constructed by Davidson who implemented a web-based bootstrapping algorithm to automatically collect a large number of hate 7163 speech examples from Tweets.",
"This is an unbalanced dataset with less hate speech.",
"Sentiment Analysis (SA) 4 .",
"The SA is a sentiment dataset from Kaggle2018.",
"The SA contains more positive cases, but fewer negative cases.",
"Since the test set is unlabelled, we only use the training set.",
"For comparison with baseline methods, Accuracy (Acc) and F-measure (F1) are used as evaluation metrics in our hate speech detection.",
"In SemEval2019 evaluation, the performance of the test set is the final result.",
"To compare with published papers, the results of the test set are used on the dataset and we use Acc and micro F1 as metrics.",
"For the DV dataset, we use a 5-fold cross-validation method to measure the performance of the proposed model.",
"To compare with previous works, We report results of DV using the standard Accuracy and weighted F1.",
"In our experiments, for the input layer, all word vectors are initialized by Glove Common Crawl Embeddings (840B Token), and the dimension is 300.",
"The category embeddings are initialized randomly, and the dimension is 100.",
"For the sentiment knowledge sharing layer, the multi-head attention has 4 heads.",
"The first Feed-Forward network has one layer with 400 neurons and the second has two layers with 200 neurons.",
"The dropout is used after each layer, and the rate is 0.1.",
"The optimizer is RMSprop, and the learning rate is 0.001.",
"The models are trained by a mini-batch of 512 instances.",
"To prevent overfitting, we use the learning rate decay and early stop in the training process.",
"SVM.",
"It is proposed by Zhang et al. (2018) and Basile et al. (2019).",
"The author implemented several features, such as n-gram, misspellings, derogatory words.",
"LSTM and GRU.",
"The method was proposed by Ding et al. (2019).",
"LSTM and GRU were used to extract the features of target sentences.",
"CNN-GRU.",
"Zhang et al. (2018) employed word embedding and learnt the latent semantic representations through a hybrid neural network CNN-GRU.",
"4 https://www.kaggle.com/dv1453/twitter-sentiment-analysis-analytics-vidya BiGRU-Capsule.",
"This baseline was proposed by Ding et al. (2019).",
"Two-layer BiGRU and a capsule layer were used to detect hate speech.",
"Universal Encoder.",
"It was proposed by In-durthi et al. (2019).",
"The author used sentence embeddings, such as lexical vectors and deep con-textualized word representations, to detect hate speech.",
"BERT and GPT.",
"They were proposed by Ben-balla et al. (2019).",
"The pre-trained model BERT and GPT were used to capture the features to detect hate speech.",
"SKS.",
"SKS is our proposed model which detects hate speech based on sentiment knowledge sharing.",
"shown in Table",
"2. From Table 2, we can see that: (1) Overall, the performance of the model is quite different on the two datasets.",
"For the DV dataset, the F1 value is about 90%, while for the SE dataset, the F1 value is less than 60%.",
"This is mainly because there are few negative examples in teh DV, and the model does not learn enough useful features.",
"Furthermore, the nuance of the language can significantly affect the performance of the model.",
"(2) The performance of SVM based on features is much worse than the neural network.",
"Especially on the SE dataset, performance is unacceptable.",
"This indicates that the neural network can better capture the semantic relationships of words for hate speech detection.",
"(3) The performance of the hybrid neural network is better than the simple Recurrent Neural Network (RNN).",
"Compared with the traditional RNNs, such as LSTM and so on, whether CNN-GRU or BiGRU-capsule, its performance has a small improvement.",
"By stacking a layer of a neural network onto another, a deep learning model is helpful for better learning of high-level features.",
"The traditional RNNs, such as LSTM and GRU, have almost the same performance.",
"(4) BERT achieves better performance on the DV dataset.",
"However, both BERT and GPT achieve worse performance on the SE dataset.",
"The experimental results show that the pre-training model is very dependent on the training data.",
"For the specific field, it is difficult to provide good feature representations without suitable and sufficient data.",
"(5) Our proposed method, SKS, achieves the 7164 Model DV SE Acc F1(wei) Acc F1(macro) SVM* -87.0 49.2 45.1 LSTM* 94.5 93.7 55.0 53.0 GRU* 94.5 93.9 54.0 52.0 CNN-GRU* -94.0 62.0 61.5 BiLSTM* 94.4 93.7 53.5 51.9 BiGRU_Stacked* -56.0 54.6 USE_SVM* -65.3 65.1 BERT* 94.8 95.8 -48.8 GPT* --51.5 SKS 95.1 96.3 65.9 65.2 Table 2: Comparison with existing methods.",
"best performance for F1.",
"Compared with other neural networks, including LSTM, GRU and BiL-STM, the F1 value of SKS is increased by nearly 3% on the DV dataset, and on the SE dataset, the performance of SKS greatly improves to nearly 10%.",
"Even compared with the strong baseline model, universal encoder, our model is superior.",
"The SKS is easier to implement and has fewer parameters.",
"We then analyze the influence of different parts of our model.",
"The results are shown in Table 3, where sc denotes ablation of sentiment knowledge sharing and the category embedding.",
"Similarly, -s means that sentiment data is not used as input for the model, and it only uses category embedding.",
"Based on the results in Table 3, we can see that: 1) The performance on the two datasets decreases significantly with the model ablation of sentiment knowledge sharing and category embedding.",
"However, the performance of the model is better than the existing hybrid neural networks.",
"It is shown that this framework can better learn the latent semantic features of the target sentence.",
"2) The per-Model DV SE Acc F1(wei) Acc F1(macro) no-gate 94.8 95.9 64.7 64.3 SKS 95.1 96.3 65.9 65.2 Table 4: the influence of gated attention.",
"formance of our model is improved slightly when the category embedding is used.",
"The main reason is that the information of derogatory words is highly related to hate speech, but it will also make the model too sensitive.",
"Therefore, the direct extraction of derogatory words' sentiment features has a limited impact on the performance.",
"3) SKS outperforms the other models, which proves the effectiveness of sentiment knowledge sharing directly.",
"We also analyse the role of gated attention in our model.",
"As shown in Table 4, the performance of the model is further improved on both datasets when the gated attention is used.",
"This framework is able to model the task relationships in a sophisticated way by deciding how the separations resulting from different gates overlap with each other (Ma et al., 2018).",
"Each gated network can learn to select which feature extraction unit is used on the input cases.",
"If the tasks are highly related, then sharing knowledge will achieve better performance.",
"Hate speech detection and sentiment analysis are highly correlated, so that sentiment knowledge sharing can improve the performance of hate",
"speech detection.",
"But we cannot ignore the impact of the scale of the sentiment dataset on the performance.",
"Since the scale of the DV is similar to the SA dataset, we focus our analysis on the SE dataset.",
"As shown in Figure 2, the performance of the model is poor when the ratio of the two types of data is 1:2.",
"As the ratio of sentiment data increases, the performance of the model is improved.",
"When the ratio is 2:1, the performance reaches a peak, and then maintains a declining trend.",
"It is observed that the ratio of multi-task data will also directly affect the performance.",
"In this paper, we explore the effectiveness of multitask learning in hate speech detection tasks.",
"The main idea is to use multiple feature extraction units to share multi-task parameters so that the model can better share sentiment knowledge, and then gated attention is used to fuse features for hate speech detection.",
"The proposed model can make full use of the sentiment information of the target and external sentiment resources.",
"We show that sentiment knowledge sharing improves system performance over the baselines and advances hate speech detection.",
"Finally, the detailed analysis further proves the validity and interpretability of our model.",
"Overall, our experiments give us a better understanding of the relationship between hate speech detection and sentiment analysis through multitask learning.",
"We have laid the groundwork for future efforts in better modelling and data selection, including different types of hate speech, the type and scale of sentiment data, and so on.",
"We thank our anonymous reviewers for their helpful comments.",
"This work was supported by grant from the Natural Science Foundation of China (No.62066044, 61632011, 62076046).",
"This work was also supported by Xinjiang Uygur Autonomous Region Natural Science Foundation Project No.2021D01B72 and National Youth Science Fund Project No.62006130."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Despite some empirical success at correcting exposure bias in machine translation, scheduled sampling algorithms suffer from a ma-jor drawback: they incorrectly assume that words in the reference translations and in sampled sequences are aligned at each time step.",
"Our new differentiable sampling algorithm addresses this issue by optimizing the probability that the reference can be aligned with the sampled output, based on a soft alignment predicted by the model itself.",
"As a result, the output distribution at each time step is evaluated with respect to the whole predicted sequence.",
"Experiments on IWSLT translation tasks show that our approach improves BLEU compared to maximum likelihood and scheduled sampling baselines.",
"In addition, our approach is simpler to train with no need for sampling schedule and yields models that achieve larger improvements with smaller beam sizes.",
"1 1 Introduction Neural machine translation (NMT) models are typically trained to maximize the likelihood of reference translations (Sutskever et al., 2014; Bahdanau et al., 2015).",
"While simple and effective, this objective suffers from the exposure bias problem (Ranzato et al., 2015): the model is only exposed to reference target sequences during training, but has to rely on its own predictions at inference.",
"As a result, errors can accumulate along the generated sequence at inference time.",
"This is a well-known issue in sequential decision making (Langford and Zadrozny, 2005; Cohen and Carvalho, 2005; Kaariainen and Langford, 2006, i.a.) and it has been addressed in past work by incorporating the previous decoding choices into the training scheme, using imitation learning (Daume et al., 2009; Ross et al., 1 The code is available at https://github.com/ Izecson/saml-nmt 2011; Bengio et al., 2015; Leblond et al., 2018) and reinforcement learning (Ranzato et al., 2015; Bahdanau et al., 2016) techniques.",
"In this paper, we focus on a simple and computationally inexpensive family of approaches, known as Data as Demonstrator (Venkatraman et al., 2015) and scheduled sampling (Bengio et al., 2015; Goyal et al., 2017).",
"The algorithms use a stochastic mixture of the reference words and model predictions with an annealing schedule controlling the mixture probability.",
"Despite their empirical success in various sequence prediction tasks, they are based on an assumption that does not hold for machine translation: they assume that words in the reference translations and in sampled sequences are aligned at each time step, which results in weak and sometimes misleading training signals.",
"In this paper, we introduce a differentiable sampling algorithm that exposes machine translation models to their own predictions during training, and allows for differences in word order when comparing model outputs with reference translations.",
"We compute the probability that the reference can be aligned with the sampled output using a soft alignment predicted based on the model states, so that the model will not be punished too severely for producing hypotheses that deviate from the reference, as long as the hypotheses can still be aligned with the reference.",
"Experiments on three IWSLT tasks (German-English, English-German and Vietnamese-English) show that our approach significantly improves BLEU compared to both maximum likelihood and scheduled sampling baselines.",
"We also provide evidence that our approach addresses exposure bias by decoding with varying beam sizes, and show that our approach is simpler to train than scheduled sampling as it requires no annealing schedule.",
"Our approach is designed to optimize the standard sequence-to-sequence model for translating a source sentence x into a target sentence y (Bah-danau et al., 2015).",
"This model computes the probability of y given x as: P ( y | x ) = T (cid:89) t =1 p ( y t | y <t , x ; ) (1) where represents the model parameters.",
"Given x , the model first produces a sequence of hidden representations h 1 ...T : h t = f ( y <t , x ) , where T is the length of y , and f is usually an encoder-decoder network.",
"At each time step t , the hidden representation h t is fed to a linear projection layer s t = W h t + b to obtain a vector of scores s t over all possible words in the vocabulary V .",
"Scores are then turned into a conditional probability distribution: p ( | y <t , x ; ) = softmax( s t ) .",
"The traditional maximum likelihood (ML) objective maximizes the log-likelihood of the training data D { ( x ( n ) , y ( n ) ) } Nn =1 consisting of N pairs of source and target sentences: JML ( ) = N (cid:88) n =1 T (cid:88) t =1 log p ( y ( n ) t | y ( n ) <t , x ( n ) ; ) (2) At test time, prefixes y <t are subsequences generated by the model and therefore contain errors.",
"By contrast, in ML training, prefixes y <t are subsequences of reference translations.",
"As a result, the model is never exposed to its own errors during training and errors accumulate at test time.",
"This mismatch is known as the exposure bias problem (Ranzato et al., 2015).",
"Bengio et al. (2015) introduced the scheduled sampling algorithm to address exposure bias.",
"Scheduled sampling gradually replaces the reference words with sampled model predictions in the prefix used at training time.",
"An annealing schedule controls the probability of using reference words vs. model predictions.",
"The training objective remains the same as the ML objective, except for the nature of the prefix y <t , which contains a mixture of reference and predicted words: JSS ( ) = N (cid:88) n =1 T (cid:88) t =1 log p ( y ( n ) t | y ( n ) <t , x ( n ) ; ) (3) Despite the empirical success of scheduled sampling, one limitation is that the discontinuity of the argmax operation makes it impossible to penalize errors made in previous steps, which can lead to slow and unstable training.",
"We address this issue using a continuous relaxation to the greedy search and sampling process, similarly to Goyal et al. (2017), which we describe in Section 2.2.",
"Another limitation of scheduled sampling is that it incorrectly assumes that the reference and predicted sequence are aligned by time indices which introduces additional noise to the training signal.",
"2 We address this problem with a novel differentiable sampling algorithm with an alignment based objective called soft aligned maximum likelihood (SAML).",
"It is used in combination with maximum likelihood to define our training objective J = JML + JSAML , where JML is computed based on reference translations, and JSAML is computed based on sampled translations of the same input sentences.",
"We define JSAML in Section 2.3.",
"To backpropagate errors made in the previous decoding steps, we use a continuous relaxation of the discrete sampling operation similar to Goyal et al. (2017), except that we use the Straight-Through (ST) Gumbel-Softmax estimator (Jang et al., 2017; Bengio et al., 2013) instead of Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2014) to better simulate the scenario at inference time.",
"3 The Gumbel-Softmax is derived from the Gumbel-Max trick (Maddison et al., 2014), an algorithm for sampling one-hot vector z R k from a categorical distribution ( p 1 , ..., p k ) : z = one-hot (arg max i (log p i + g i )) (4) where g i is the Gumbel noise drawn i.i.d from Gumbel (0 , 1) 4 , and is a hyperparameter controlling the scale of the noise.",
"Here, the trick is used to approximate the discontinuous argmax function with the differentiable softmax: z = softmax((log p i + g i ) / ) (5) 2 https://nlpers.blogspot.com/2016/03/a-dagger-by-any-other-name-scheduled.html 3 The Straight-Through estimator consistently outperforms the Gumbel-Softmax in preliminary experiments.",
"4 g i = log( log( u i )) and u i Uniform(0 , 1) .",
"where is the temperature parameter.",
"As diminishes to zero, z becomes the same as one-hot sample z .",
"The Straight-Through Gumbel-Softmax maintains the differentiability of the Gumbel-Softmax estimator while allowing for discrete sampling by taking different paths in the forward and backward pass.",
"It uses argmax to get the one-hot sample z in the forward pass, but uses its continuous approximation z in the backward pass.",
"While ST estimators are biased, they have been shown to work well in latent tree learning (Choi et al., 2018) and semi-supervised machine translation (Niu et al., 2019).",
"The soft aligned maximum likelihood (SAML) is defined as the probability that the reference can be aligned with the sampled output using a soft alignment predicted by the model:",
"where T is the length of the reference sequence, T (cid:48) is the length of the sampled sequence, a tj is the predicted soft alignment between the reference word y t and sampled prefix y <j .",
"maximizing: JSAML ( ) = N (cid:88) n =1 log PSAML ( y ( n ) | x ( n ) ) (7) The conditional probability of the next word p ( y t | y <j , x ; ) is computed as follows: p ( | y <j , x ; ) = softmax( W h j + b ) (8)",
"where W and b are model parameters.",
"h j is the hidden representation at step j conditioned on the Task sentences (K) vocab (K) train dev test src tgt de-en 153.3 7.0 6.8 113.5 53.3 vi-en 121.3 1.5 1.3 23.9 50.0 Table 1: We evaluate on two translation tasks.",
"source sequence x and the preceding words y <j sampled from the model distribution using differentiable sampling: h j = f ( y <j , x ) (9) We compute the soft alignment a tj between y t and y <j based on the model's hidden states: a tj = exp( score ( h j , e y t )) (cid:80) T (cid:48) i =1 exp( score ( h i , e y t )) (10) where e y t is the embedding of the reference word y t .",
"The score function captures the similarity between the hidden state h j and the embedding e y t .",
"We use the dot product here as it does not introduce additional parameters: score ( h , e ) = h (cid:62) e (11) Figure 1 illustrates how the resulting objective differs from scheduled sampling: (1) it is computed over sampled sequences as opposed to sequences that contain a mixture of sampled and reference words, and (2) each reference word is soft-aligned to the sampled sequence.",
"Data We evaluate our approach on IWSLT 2014 German-English (de-en) as prior work (Goyal et al., 2017), as well as two additional tasks: IWSLT 2014 English-German (en-de) and IWSLT",
"2015 Vietnamese-English (vi-en).",
"For de-en and en-de, we follow the preprocessing steps in Ran-zato et al. (2015).",
"For vi-en, we use the data preprocessed by Luong and Manning (2015), with test2012 for validation and test2013 for testing.",
"Table 1 summarizes the data statistics.",
"Setup Our translation models are attentional RNNs (Bahdanau et al., 2015) built on Sockeye (Hieber et al., 2017).",
"We use bi-directional LSTM encoder and single-layer LSTM decoder with 256 hidden units, embeddings of size 256, and multilayer perceptron attention with a layer size of 256.",
"We apply layer normalization (Ba et al., 2016) and label smoothing (0.1).",
"We add dropout to embeddings (0.1) and decoder hidden states (0.2).",
"For ST Gumbel-Softmax, we use temperature = 1 and noise scale = 0 .",
"5 .",
"The decoding beam size is 5 unless stated otherwise.",
"We train the models using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 1024 words.",
"We checkpoint models every 1000 updates.",
"The initial learning rate is 0.0002, and it is reduced by 30% after 4 checkpoints without validation perplexity improvement.",
"Training stops after 12 checkpoints without improvement.",
"For training efficiency, we first pre-train a baseline model for each task using only JML and fine-tune it using different approaches.",
"In the fine-tuning phase, we inherit all settings except that we initialize the learning rate to 0.00002 and set the minimum number of checkpoints before early stopping to 24.",
"We fine-tune each randomly seeded model independently.",
"Baselines We compare our model against three baselines: (1) a standard baseline trained with the ML objective, and models fine-tuned with (2) scheduled sampling ( SS ) (Bengio et al., 2015) and (3) differentiable scheduled sampling ( DSS ) (Goyal et al., 2017).",
"In SS and DSS, the probability of using reference words (cid:15) s is annealed using inverse sigmoid decay (Bengio et al., 2015): (cid:15) s = k/ ( k + exp( i/k )) at the i -th checkpoint with k = 10 .",
"Results Table 2 shows that the SAML improves over the ML baseline by +0.5 BLEU on de-en, +0.7 BLEU on en-de, and +1.0 BLEU on vi-en task.",
"In addition, SAML consistently improves over both the scheduled sampling and differentiable scheduled sampling on all tasks.",
"All improvements are significant with p < 0 .",
"002 .",
"Interestingly, differentiable scheduled sampling performs no better than scheduled sampling in our experiments, unlike in Goyal et al. (2017).",
"Unlike scheduled sampling, our approach does not require an annealing schedule, and it is therefore simpler to train.",
"We verify that the annealing schedule is needed in scheduled sampling by training a contrastive model with the same objective as scheduled sampling, but without annealing schedule (Table 2).",
"We set the sampling rate to 0.5.",
"The contrastive model hurts BLEU scores by at least 4.0 points compared to both the ML baseline and models fine-tuned with scheduled sampling, con-firming that scheduled sampling needs the annealing schedule to work well.",
"We further examine the performance gain of different approaches over the baseline with varying beam sizes (Figure 2).",
"Our approach yields larger BLEU improvements when decoding with greedy search and smaller beams, while there is no clear pattern for scheduled sampling models.",
"These results support the hypothesis that our approach mitigates exposure bias, as it yields bigger improvements in settings where systems have fewer opportunities to recover from early errors.",
"Daume et al. (2009) first addressed exposure bias in an imitation learning framework by training a classifier on examples generated using a mixture of the ground truth and the model's current predictions.",
"DAgger (Ross et al., 2011) is a similar algorithm which differs in how the training examples are generated and aggregated.",
"Both al--0.2 0 0.2 0.4 0.6 0.8 1 1 2 3 4 5 BLEU Improvement B e a m S i z e DSS SS SAML",
"gorithms require an expert policy, which produces the best next token given any model predicted prefix, and assume that policy can be efficiently computed from the reference.",
"However, for structured prediction tasks such as machine translation with large vocabulary and complex loss functions, it is intractable to find the best next token given any prefix.",
"For time series modeling, the Data as Demonstrator algorithm (Venkatraman et al., 2015) derives the expert policy directly from the reference sequences which are aligned with the sampled sequences at each time step.",
"Scheduled sampling algorithms (Bengio et al., 2015; Goyal et al., 2017) use the same strategy to train neural sequence-to-sequence models for a broader range of language generation tasks, even though the time alignment between reference and sampled sequences does not hold.",
"Leblond et al. (2018) proposed to complete a predicted prefix with all possible reference suffixes and picking the reference suffix that yields the highest BLEU-1 score.",
"However, they found that this approach performs well only when the prefix is close to the reference.",
"Reinforcement learning (RL) algorithms (Bah-danau et al., 2016; Sutton and Barto, 2018; Van Hasselt et al., 2016) address exposure bias by directly optimizing a sentence-level reward for the model generated sequences.",
"Evaluation metrics such as BLEU can be used as rewards, but they are discontinuous and hard to optimize.",
"Techniques such as policy gradient (Williams, 1992) and actor-critic (Sutton and Barto, 2018; Degris et al., 2012) are thus required to find an unbiased estimation of the gradient to optimize the model.",
"Due to the high variance of the gradient estimation, training with RL can be slow and unstable (Henderson et al., 2018; Wu et al., 2018).",
"Recent alternatives use data augmentation to incorporate the sentence-level reward into the training objective more efficiently (Norouzi et al., 2016).",
"Finally, our SAML loss shares the idea of flexi-ble reference word order with the bag-of-word loss introduced by Ma et al. (2018) to improve source coverage.",
"However, their loss is computed with teacher forcing and therefore does not address exposure bias.",
"We introduced a differentiable sampling algorithm which exposes a sequence-to-sequence model to its own predictions during training and compares them to reference sequences flexibly to backpropagate reliable error signals.",
"By soft aligning reference and sampled sequences, our approach consistently improves BLEU over maximum likelihood and scheduled sampling baselines on three IWSLT tasks, with larger improvements for greedy search and smaller beam sizes.",
"Our approach is also simple to train, as it does not require any sampling schedule.",
"We thank the anonymous reviewers, Amr Sharaf, Naomi Feldman, Hal Daume III and the CLIP lab at UMD for helpful comments.",
"This research is supported in part by an Amazon Web Services Machine Learning Research Award and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] | [
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"result",
"method",
"other",
"other",
"other",
"other"
] |
[
"Disease is one of the fundamental entities in biomedical research.",
"Recognizing such entities from biomedical text and then normalizing them to a standardized disease vocabulary offer a tremendous opportunity for many downstream applications.",
"Previous studies have demonstrated that joint modeling of the two sub-tasks has superior performance than the pipelined counterpart.",
"Although the neural joint model based on multi-task learning framework has achieved state-of-the-art performance, it suffers from the boundary inconsistency problem due to the separate decoding procedures.",
"Moreover, it ignores the rich information (e.g., the text surface form) of each candidate concept in the vocabulary, which is quite essential for entity normalization.",
"In this work, we propose a neural transition-based joint model to alleviate these two issues.",
"We transform the end-to-end disease recognition and normalization task as an action sequence prediction task, which not only jointly learns the model with shared representations of the input, but also jointly searches the output by state transitions in one search space.",
"Moreover, we introduce attention mechanisms to take advantage of the text surface form of each candidate concept for better normalization performance.",
"Experimental results conducted on two publicly available datasets show the effectiveness of the proposed method.",
"Disease is one of the fundamental entities in biomedical research, thus it is one of the most searched topics in the biomedical literature (Do-gan et al., 2009) and the internet (Brownstein et al., 2009).",
"Automatically identifying diseases mentioned in a text (e.g., a PubMed article or a health webpage) and then normalizing these identified mentions to their mapping concepts in a standardized disease vocabulary (e.g., with primary name, synonyms and definition, etc.) offers a tremendous opportunity for many downstream applications, such as mining chemical-disease relations from the literature (Wei et al., 2015), and providing much more relevant resources based on the search queries (Dogan et al., 2014), etc.",
"Examples of such disease vocabularies includes MeSH (http://www.nlm.nih.gov/mesh/) and OMIM (http://www.ncbi.nlm.nih.gov/omim).",
"Previous studies (Leaman and Lu, 2016; Lou et al., 2017; Zhao et al., 2019) show the effectiveness of the joint methods for the end-to-end disease recognition and normalization (aka linking) task to alleviated the error propagation problem of the traditional pipelined solutions (Strubell et al., 2017; Leaman et al., 2013; Xu et al., 2016, 2017).",
"Although TaggerOne (Leaman and Lu, 2016) and the discrete transition-based joint model (Lou et al., 2017) successfully alleviate the error propagation problem, they heavily rely on hand-craft feature engineering.",
"Recently, Zhao et al. (Zhao et al., 2019) proposes a neural joint model based on the multi-task learning framework (i.e., MTL-feedback) which significantly outperforms previous discrete joint solutions.",
"MTL-feedback jointly shares the representations of the two sub-tasks (i.e., joint learning with shared representations of the in-put), however, their method suffers from the boundary inconsistency problem due to the separate decoding procedures (i.e., separate search in two different search spaces).",
"Moreover, it ignores the rich information (e.g., the text surface form) of each candidate concept in the vocabulary, which is quite essential for entity normalization.",
"In this work, we propose a novel neural transition-based joint model named NeuJoRN for disease named entity recognition and normalization, to alleviate these two issues of the multi-task learning based solution (Zhao et al., 2019).",
"We transform the end-to-end disease recognition and normalization task as an action sequence prediction task.",
"More specifically, we introduce four types of actions (i.e., OUT, SHIFT, REDUCE, SEGMENT) for the recognition purpose and one type of action (i.e., LINKING) for the normalization purpose.",
"Our joint model not only jointly learns the model with shared representations, but also jointly searches the output by state transitions in one search space.",
"Moreover, we introduce attention mechanisms to take advantage of text surface form of each candidate concept for better linking action prediction.",
"We summarize our contributions as follows.",
"We propose a novel neural transition-based joint model, NeuJoRN, for disease named entity recognition and normalization, which not only jointly learns the model with shared representations, but also jointly searches the output by state transitions in one search space.",
"We introduce attention mechanisms to take advantage of text surface form of each candidate concept for normalization performance.",
"We evaluate our proposed model on two public datasets, namely the NCBI and BC5CDR datasets.",
"Extensive experiments show the effectiveness of the proposed model.",
"We define the end-to-end disease recognition and normalization task as follows.",
"Given a sentence x from a document d ( e.g., a PubMed abstract) and a controlled vocabulary KB ( e.g., MeSH and OMIM) which consists of a set of disease concepts, the task of end-to-end disease recognition and normalization is to identify all disease mentions M = { m 1 , m 2 , ..., m | M | } mentioned in x and to link each of the identified disease mention m i with its mapping concept c i in KB, m i c i .",
"If there is no mapping concept in KB for m i , then m i NIL , where NIL denotes that m i is un-linkable.",
"We first introduce the transition system used in the model, and then introduce the neural transition-based joint model for this task.",
"parser (Watanabe and Sumita, 2015; Lample et al., 2016), which constructs the output of each given sentence x and controlled vocabulary KB through state transitions with a sequence of actions A .",
"We define a state as a tuple ( , , O ) , which consists of the following three structures: stack ( ): the stack is used to store tokens being processed.",
"buffer ( ): the buffer is used to store tokens to be processed.",
"output ( O ): the output is used to store the recognized and normalize mentions.",
"We define a start state with the stack and the output O being both empty, and the buffer containing all the tokens of a given sentence x .",
"Similarly, we define an end state with the stack and buffer being both empty, and the output O saving the recognized and normalized entity mention.",
"The transition system begins with a start state and ends with an end state.",
"The state transitions are accomplished by a set of transition actions A , which consume the tokens in and build the output O step by step.",
"As shown in Table 1, we define 5 types of transition actions for state transitions, and their logics are summarized as follows: OUT pops the first token 0 from the buffer , which indicates that this token does not belong to any entity mention.",
"SHIFT moves the first token 0 from the buffer to the stack , which indicates that this token is part of an entity mention.",
"REDUCE pops the top two tokens (or spans) 0 and 1 from the stack and concatenates them as a new span, which is then pushed back to the stack .",
"SEGMENTt pops the top token (or span) 0 from the stack and creates a new entity mention t 0 with entity type t , which is then added to the output .",
"LINKINGc links the previous recognized but unnormalized mention t 0 in the output with its mapping concept with id c and updates the mention with t,c 0 .",
"Table 2 shows an example of state transitions for the recognition and normalization of disease mentions given a sentence Most colon cancers arise from mutations and a controlled vocabulary MeSH.",
"State 0 is the start state where denotes that the stack and output O are initially empty, and the buffer is initialized with all the tokens of the given sentence.",
"State 9 is the end state where denotes that the stack and buffer are finally empty, and colon cancers disease,D 003110 in the output O denote that the mention colon can-cers is a disease mention and is normalized to the concept with id D003110 in MeSH.",
"More specifi-cally, state 5 creates a new disease mention colon cancers disease and add it to the output .",
"State 6 links the previous recognized but unnormalized disease mention in the output with its mapping concept with id D003110 in MeSH.",
"Based on the introduced transition system, the end-to-end disease recognition and normalization task becomes a new sequence to sequence task, i.e., the action sequence prediction task.",
"The input is a sequence of words x n 1 = ( w 1 , w 2 , ..., w n ) and a controlled vocabulary KB, and the output is a sequence of actions A m 1 = ( a 1 , a 2 , ..., a m ) .",
"The goal of the task is to find the most probable output action sequence A given the input word sequence x n 1 and KB, that is A = arg max A p ( A m 1 | x n 1 , KB ) (1) Formally, at each step t , the model predicts the next action based on the current state S t and the action history A t 1 1 .",
"Thus, the task is models as ( A , S ) = argmax A,S (cid:89) t p ( a t , S t +1 | A t 1 1 , S t ) (2) where a t is the generated action at step t , and S t +1 is the new state according to a t .",
"Let r t denote the representation for computing the probability of the action a t at step t , thus p ( a t | r t ) = exp ( w (cid:124) a t r t + b a t ) a (cid:48) A ( S t ) exp ( w (cid:124) a (cid:48) r t + b a (cid:48) ) (3) where w a and b a denote the learnable parameter vector and bias term, respectively, and A ( S t ) denotes the next possible valid actions that may be taken given the current state S t .",
"Finally, the overall optimization function of the action sequence prediction task can be written as ( A , S ) = argmax A,S (cid:89) t p ( a t , S t +1 | A t 1 1 , S t ) = argmax A,S (cid:89) t p ( a t | r t ) (4) 3.3 Dense Representations We now introduce neural networks to learn the dense representations of an input sentence x and each state in the whole transition process to predict the next action.",
"Input Representation We represent each word x i in a sentence x by concatenating its character-level word representation, non-contextual word representation, and contextual word representation: x i = [ v chari ; v wi ; ELMo i ] (5) where v chari denotes its character-level word representation learned by using a CNN network (Ma and Hovy, 2016), v wi denotes its non-contextual word representation initialized with Glove (Pennington et al., 2014) embeddings, which is pre-trained on 6 billion words from Wikipedia and web text, and ELMo i denotes its contextual word representation initialized with ELMo (Peters et al., 2018).",
"We can also explore the contextual word representation from BERT (Devlin et al., 2018) by averaging the embeddings of the subwords of each word.",
"We leave it to the future work.",
"State Representation At each step t in the transition process, let's consider the representation of the current state S t = ( t , t , A t ) , where t = ( ..., 1 , 0 ) , t = ( 0 , 1 , ... ) and A t = ( a t 1 , a t 2 , ... ) .",
"The buffer t is represented with BiLSTM (Graves et al., 2013) to represent the words in the buffer: b t = BiLSTM ([ 0 , 1 , ... ]) (6) The stack t and the actions A t are represented with StackLSTM (Dyer et al., 2015): s t = StackLSTM ([ ..., 1 , 0 ]) a t = StackLSTM ([ a t 1 , a t 2 , ... ]) (7) We classify all the actions defined in Table 1 into two categories corresponding to two different purposes, i.e., the recognition and normalization purposes.",
"OUT, SHIFT, REDUCE, SEGMENTt are used for the recognition purpose, and LINKINGc is used for the normalization purpose.",
"As shown in Figure",
"1(a) and",
"1(b), we define two different state representations for predicting the actions in different purposes.",
"Specifically, for predicting the actions in the recognition purpose, we represent the state as r NERt = ReLU ( W [ s 1 t ; s 0 t ; b 0 t ; a 1 t ] + d ) (8) where ReLU is an activation function, W and d denote the learnable parameter matrix and bias term, respectively, and s 0 t and s 1 t denote the first and second representations of the stack .",
"b 0 t denotes the first representation of the buffer .",
"a 1 t denotes the last representation of the action history A .",
"For predicting the actions in the normalization purpose, we represent the state as r NORMt = ReLU ( W [ l (cid:48) m ; r (cid:48) m ; m (cid:48) ; c (cid:48) ; c ; a 1 t ] + d ) (9) where ReLU is an activation function, W and d denote the learnable parameter matrix and bias term, respectively, and l (cid:48) m and r (cid:48) m denotes the left-side and right-side context representations by",
"(i) first applying attention with the concept representation c to highlight the relevant parts in mentions' local context, and",
"(ii) then applying max-pooling operation to aggregate the reweighted representations of all the context words.",
"m (cid:48) and c (cid:48) are the representations of the mention and candidate concept by applying CoAttention mechanism (Tay et al., 2018; Jia et al., 2020).",
"c denotes the candidate concept representation by",
"(i) first run a BiLSTM (Graves et al., 2013) to derive the contextual representation of each word in the candidate concept, and",
"(ii) then applying max-pooling operation to aggregate the representations of all concept words.",
"a 1 t denotes the last representation of the action history A .",
"Decoding is the key step in both training and test, which is to search for the best output structure ( i.e., action sequence) under the current model parameters.",
"In this work, we use two different search strategies with different optimizations.",
"Greedy Search For efficient decoding, a widely-used greedy search algorithm (Wang et al., 2017) can be adopted to minimize the negative log-likelihood of the local action classifier in Equation (3, 8, 9).",
"Beam Search The main drawback of greedy search is error propagation (Wang et al., 2017).",
"An incorrect action will fail the following actions, leading to an incorrect output sequence.",
"One solution to alleviate this problem is to apply beam search.",
"In this work, we use the Beam-Search Optimization (BSO) method with LaSO update (Wiseman and Rush, 2016) to train our beam-search model, where the max-margin loss is adopted.",
"We use two public available datasets in this study, namely NCBI the NCBI disease corpus (Dogan et al., 2014) and BC5CDR the BioCreative V CDR task corpus (Li et al., 2016b).",
"NCBI dataset contains 792 PubMed abstracts, which was split into 692 abstracts for training and development, and 100 abstracts for testing.",
"A disorder mention in each PubMed abstract was manually annotated with its mapping concept identifier in the MEDIC Table 3: Overall statistics of the datasets.",
"lexicon.",
"BC5CDR dataset contains 1,500 PubMed abstracts, which was equally split into three parts for training, development and test, respectively.",
"A disease mention in each abstract is manually annotated with the concept identifier to which it refers to a controlled vocabulary.",
"In this study, we use the July 6, 2012 version of MEDIC, which contains 7,827 MeSH identifiers and 4,004 OMIM identifiers, grouped into 9,664 disease concepts.",
"Table3 show the overall statistics of the two datasets.",
"To facilitate the generation of candidate linking actions, we perform some preprocessing steps of each candidate mention and each concept in KB with the following strategies:",
"(i) Spelling Correction for each candidate mention in the datasets, we replace all the misspelled words using a spelling check list as in previous work (D'Souza and Ng, 2015; Li et al., 2017).",
"(ii) Abbreviation Resolution we use Ab3p (Sohn et al., 2008) toolkit to detect and replace the abbreviations with their long forms within each document and also expand all possible abbreviated disease mentions using a dictionary collected from Wikipedia as in previous work (D'Souza and Ng, 2015; Li et al., 2017).",
"(iii) Numeric Synonyms Resolutions we replace all the numerical words in the mentions and concepts to their corresponding Arabic numerals as in previous work (D'Souza and Ng, 2015; Li et al., 2017).",
"We generate candidate linking actions ( i.e., candidate concepts) for each mention with the commonly used information retrieval based method, which includes the following two steps.",
"We first index all the concept names and training mentions with their concept ids.",
"Then, the widely-used BM25 model provided by Lucene is employed to retrieve the top 10 candidate concepts { c i } 10 i =1 for each mention m .",
"Following previous work (Leaman and Lu, 2016; Lou et al., 2017; Zhao et al., 2019), we utilize the evaluation kit 1 for evaluating the model performances.",
"We report F1 score for the recognition task at the mention level, and F1 score for the normalization task at the abstract level.",
"We use the AdamW optimizer (Loshchilov and Hutter, 2019) for parameter optimization.",
"Most of the model hyper-parameters are listed in Table 4.",
"Since increasing the beam size will increase the decoding time, we only report results with beam size 1, 2, and 4.",
"Table 5 shows the overall comparisons of different models for the end-to-end disease named entity recognition and normalization task.",
"The first part shows the performance of different pipelined methods for the task.",
"DNorm (Leaman et al., 2013) is a traditional method, which needs feature engineering.",
"IDCNN (Strubell et al., 2017) is a neural model based on BiLSTM-CRF, which requires few effort of feature engineering.",
"The second part 1 http://www.biocreative.org/tasks/biocreative-v/track-3-cdr shows the performance of different joint models for the task.",
"TaggerOne (Leaman et al., 2013) is a joint solution based on semi-CRF.",
"Transition-based Model (Lou et al., 2017) is a joint solution based on discrete transition-based method.",
"Both of these two models rely heavily on feature engineering.",
"MTL-feedback (Zhao et al., 2019) is neural joint solution based on multi-task learning.",
"NeuJoRN is our neural transition-based joint model for the whole task.",
"From the comparisons, we find that (1) IDCNN does not perform well enough although it relies few efforts of feature engineering.",
"(2) All the joint models significantly outperform the pipelined methods.",
"(3) The deep-learning based joint models significantly outperform the traditional machine learning based methods.",
"(4) Our proposed NeuJoRN outperforms MTL-feedback by at least 0.57% and 0.59% on the recognition and normalization tasks, respectively.",
"Table 6 shows the comparisons of different search strategies of our proposed NeuJoRN.",
"From the results, we find that (1) The methods based on beam search strategies outperforms the greedy search strategy, which indicates that the beam search solutions could alleviate the error propagation problem of the greedy search solution.",
"(2) The model with beam size 4 achieves the best performance.",
"The larger the beam size, the better the performance, however the lower the decoding speed.",
"(3) Our greedy search based solution doesn't outperform the MLT-feedback method.",
"Table 7 shows the effectiveness of the proposed attention mechanisms.",
"When we remove the attention mechanism for representing the left-side and right-side local context, the performance dropped a little bit.",
"However, when we remove the CoAttention mechanism, which is used for directly modeling the matching between the mention and candidate concept, the performance dropped significantly.",
"This group of comparisons indicates that importance of the matching between the mention and candidate concept for the entity normalization task.",
"Disease Named Entity Recognition DNER has been widely studied in the literature.",
"Most previous studies (Leaman et al., 2013; Xu et al., 2015, 2016) transform this task as a sequence labeling task, and conditional random fields (CRF) based methods are widely adopted to achieve good performance.",
"However, these methods heavily rely on hand-craft feature engineering.",
"Recently, neural models such as BiLSTM-CRF based methods (Strubell et al., 2017; Wang et al., 2019) and BERT-based methods (Kim et al., 2019) have achieved state-of-the-art performance.",
"Disease Named Entity Normalization DNEN has also been widely studied in the literature.",
"Most studies assume that the entity mentions are pre-detected by a separate DNER model, and focus on developing methods to improve the normaliation accuracy (Lou et al., 2017), resulting in developing rule-based methods (D'Souza and Ng, 2015), machine learning-based methods (Leaman et al., 2013; Xu et al., 2017), and recent deep learning-based methods (Li et al., 2017; Ji et al., 2020; Wang et al., 2020; Vashishth et al., 2021; Chen et al., 2021).",
"However, the pipeline architecture which performs DNER and DNEN separately suffers from the error propagation problem.",
"In this work, we propose a neural joint model to alleviate this issue.",
"Joint DNER and DNEN Several studies (Leaman and Lu, 2016; Lou et al., 2017; Zhao et al., 2019) show the effectiveness of the joint methods to alleviated the error propagation problem.",
"Although TaggerOne (Leaman and Lu, 2016) and the discrete transition-based joint model (Lou et al., 2017) successfully alleviated the error propagation problem, they heavily rely on hand-craft feature engineering.",
"Recently, Zhao et al. (Zhao et al., 2019) propose a neural joint model based on the multi-task learning framework (i.e., MTL-feedback) which significantly outperforms previous discrete joint solutions.",
"However, their method suffers from the boundary inconsistency problem due to the separate decoding procedures (i.e., separate search in two different search spaces).",
"Moreover, it ignores the rich information (e.g., the text surface form) of each candidate concept in the vocabulary, which is quite essential for entity normalization.",
"In this work, we propose a neural joint model to alleviate these two issues.",
"Transition-based Models Transition-based models are widely used in parsing and translation (Watanabe and Sumita, 2015; Wang et al., 2018; Meng and Zhang, 2019).",
"Recently, these models are successfully applied to information extraction tasks, such as joint POS tagging and dependency parsing (Yang et al., 2018), joint entity and relation extraction (Li and Ji, 2014; Li et al., 2016a; Ji et al., 2021).",
"Several studies propose discrete transition-based joint model for entity recognition and normalization(Qian et al., 2015; Ji et al., 2016; Lou et al., 2017).",
"In this work, we propose a neural transition-based joint model for disease named entity recognition and normalization.",
"0.8868 0.8803 0.8964 0.8779 0.8729 0.8853",
"Zongcheng Ji, Omid Ghiasvand, Stephen Wu, and Hua Xu.",
"2021.",
"A Discrete Joint Model for Entity and Relation Extraction from Clinical Notes.",
"In AMIA 2021 Informatics Summit .",
"Zongcheng Ji, Aixin Sun, Gao Cong, and Jialong Han.",
"2016.",
"Joint Recognition and Linking of Fine-Grained Locations from Tweets.",
"In WWW , pages 12711281.",
"Zongcheng Ji, Qiang Wei, and Hua Xu.",
"2020.",
"BERT-based Ranking for Biomedical Entity Normalization.",
"In AMIA 2020 Informatics Summit , pages 269277.",
"Ningning Jia, Xiang Cheng, Sen Su, and Liyuan Ding.",
"2020.",
"CoGCN: Combining co-attention with graph convolutional network for entity linking with knowledge graphs.",
"Expert Systems , page e12606.",
"Donghyeon Kim, Jinhyuk Lee, Chan Ho So, Hwisang Jeon, Minbyul Jeong, Yonghwa Choi, Wonjin Yoon, Mujeen Sung, and Jaewoo Kang.",
"2019.",
"A Neural Named Entity Recognition and Multi-Type Normalization Tool for Biomedical Text Mining.",
"IEEE Access , 7:7372973740.",
"Guillaume Lample, Miguel Ballesteros, Sandeep Sub-ramanian, Kazuya Kawakami, and Chris Dyer.",
"2016.",
"Neural Architectures for Named Entity Recognition.",
"In NAACL , pages 260270, San Diego, California.",
"Association for Computational Linguistics.",
"Robert Leaman, Rezarta Islamaj Dogan, and Zhiyong Lu.",
"2013.",
"DNorm: disease name normalization with pairwise learning to rank.",
"Bioinformatics , 29:2909 2917.",
"Robert Leaman and Zhiyong Lu.",
"2016.",
"TaggerOne: joint named entity recognition and normalization with semi-Markov Models.",
"Bioinformatics , 32(18):28392846.",
"Fei Li, Yue Zhang, Meishan Zhang, and Donghong Ji.",
"2016a.",
"Joint models for extracting adverse drug events from biomedical text.",
"In IJCAI , pages 2838 2844.",
"Haodi Li, Qingcai Chen, Buzhou Tang, Xiaolong Wang, Hua Xu, Baohua Wang, and Dong Huang.",
"2017.",
"CNN-based ranking for biomedical entity normalization.",
"BMC Bioinformatics , 18(11):385.",
"Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu.",
"2016b.",
"BioCreative V CDR task corpus: a resource for chemical disease relation extraction.",
"In this work, we proposed a novel neural transition-based joint model for disease named entity recognition and normalization.",
"Experimental results conducted on two public available datasets show the effectiveness of the proposed method.",
"In the future, we will apply this joint model to more different types of datasets, such as the clinical notes, drug labels, and tweets, etc."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"objective",
"objective",
"method",
"objective",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method"
] |
[
"Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews.",
"Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words.",
"However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections.",
"In this paper, we address this problem by means of effective encoding of syntax information.",
"Firstly, we de-fine a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree.",
"Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction.",
"Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence.",
"Aspect-based sentiment analysis (ABSA) aims at fine-grained sentiment analysis of online affective texts such as product reviews.",
"Specifi-cally, its objective is to determine the sentiment polarities towards one or more aspects appearing in a single sentence.",
"An example of this task is, given a review great food but the service was dreadful , to determine the polarities towards the aspects food and service .",
"Since the two aspects express quite opposite sentiments, just assigning a sentence-level sentiment polarity is inappropriate.",
"In this regard, ABSA can provide better insights into user reviews compared with sentence-level sentiment analysis.",
"Corresponding author.",
"Intuitively, connecting aspects with their respective opinion words lies at the heart of this task.",
"Most recent efforts (Wang et al., 2016b; Li et al., 2017; Ma et al., 2017; Fan et al., 2018) resort to assorted attention mechanisms to achieve this goal and have reported appealing results.",
"However, due to the complexity of language morphology and syntax, these mechanisms fail occasionally.",
"We illustrate this problem with a real review So delicious was the noodles but terrible vegetables , in which the opinion word terrible is closer to the aspect noodles than delicious , and there could be terrible noodles appearing in some other reviews which makes these two words closely associated.",
"Therefore, the attention mechanisms could attend to terrible with a high weight when evaluating the aspect noodles .",
"Some other efforts explicitly leverage the syntactic structure of a sentence to establish the connections.",
"Among them, early attempts rely on handcrafted syntactic rules (Qiu et al., 2011; Liu et al., 2013), though they are subject to the quantity and quality of the rules.",
"Dependency-based parse trees are then used to provide more comprehensive syntactic information.",
"For this purpose, a whole dependency tree can be encoded from leaves to root by a recursive neural network (RNN) (Lakkaraju et al., 2014; Dong et al., 2014; Nguyen and Shirai, 2015; Wang et al., 2016a), or the internal node distance can be computed and used for attention weight decay (He et al., 2018a).",
"Recently, graph neural networks (GNNs) are explored to learn representations from the dependency trees (Zhang et al., 2019; Sun et al., 2019b; Huang and Carley, 2019).",
"The shortcomings of these approaches should not be overlooked.",
"First, the dependency relations, which may indicate the connections between aspects and opinion words, are ignored.",
"Second, empirically, only a small part of the parse tree is related to this 3230 task and it is unnecessary to encode the whole tree (Zhang et al., 2018; He et al., 2018b).",
"Finally, the encoding process is tree-dependent, making the batch operation inconvenient during optimization.",
"In this paper, we re-examine the syntax information and claim that revealing task-related syntactic structures is the key to address the above issues.",
"We propose a novel aspect-oriented dependency tree structure constructed in three steps.",
"Firstly, we obtain the dependency tree of a sentence using an ordinary parser.",
"Secondly, we reshape the dependency tree to root it at a target aspect in question.",
"Lastly, pruning of the tree is performed to retain only edges with direct dependency relations with the aspect.",
"Such a unified tree structure not only enables us to focus on the connections between aspects and potential opinion words but also facilitates both batch and parallel operations.",
"Then we propose a relational graph attention network (R-GAT) model to encode the new dependency trees.",
"R-GAT generalizes graph attention network (GAT) to encode graphs with labeled edges.",
"Extensive evaluations are conducted on the SemEval 2014 and Twitter datasets, and experimental results show that R-GAT significantly improves the performance of GAT.",
"It also achieves superior performance to the baseline methods.",
"The contributions of this work include: We propose an aspect-oriented tree structure by reshaping and pruning ordinary dependency trees to focus on the target aspects.",
"We propose a new GAT model to encode the dependency relations and to establish the connections between aspects and opinion words.",
"The source code of this work is released for future research.",
"1 2 Related Work Most recent research work on aspect-based sentiment analysis (ABSA) utilizes attention-based neural models to examine words surrounding a target aspect.",
"They can be considered an implicit approach to exploiting sentence structure, since opinion words usually appear not far from aspects.",
"Such approaches have led to promising progress.",
"Among them, Wang et al. (2016b) proposed to use an attention-based LSTM to identify important sentiment information relating to a target aspect.",
"Chen et al. (2017) introduced a multi-layer attention mechanism to capture long-distance opinion words for aspects.",
"For a similar purpose, Tang et al. (2016) employed Memory Network with multi-hop attention and external memory.",
"Fan et al. (2018) proposed a multi-grained attention network with both fine-grained and coarse-grained attentions.",
"The pre-trained language model BERT (Devlin et al., 2018) has made successes in many classification tasks including ABSA.",
"For example, Xu et al. (2019) used an additional corpus to post-train BERT and proved its effectiveness in both aspect extraction and ABSA.",
"Sun et al. (2019a) converted ABSA to a sentence-pair classification task by constructing auxiliary sentences.",
"Some other efforts try to directly include the syntactic information in ABSA.",
"Since aspects are generally assumed to lie at the heart of this task, establishing the syntactic connections between each target aspect and the other words are crucial.",
"Qiu et al. (2011) manually defined some syntactic rules to identify the relations between aspects and potential opinion words.",
"Liu et al. (2013) obtained partial alignment links with these syntactic rules and proposed a partially supervised word alignment model to extract opinion targets.",
"Afterward, neural network models were explored for this task.",
"Lakkaraju et al. (2014) used a recursive neural network (RNN) to hierarchically encode word representations and to jointly extract aspects and sentiments.",
"In another work, Wang et al. (2016a) combined the recursive neural network with conditional random fields (CRF).",
"Moreover, Dong et al. (2014) proposed an adaptive recursive neural network (AdaRNN) to adaptively propagate the sentiments of words to the target aspect via semantic composition over a dependency tree.",
"Nguyen et al. (2015) further combined the dependency and constituent trees of a sentence with a phrase recursive neural network (PhraseRNN).",
"In a simpler approach, He et al. (2018a) used the relative distance in a dependency tree for attention weight decay.",
"They also showed that selectively focusing on a small subset of context words can lead to satisfactory results.",
"Recently, graph neural networks combined with dependency trees have shown appealing effectiveness in ABSA.",
"Zhang et al. (2019) and Sun et al. (2019b) proposed to use graph convolutional networks (GCN) to learn node representations from a dependency tree and used them together with 3231 I like the [recipe] pos here.",
"other features for sentiment classification.",
"For a similar purpose, Huang and Carley (2019) used graph attention networks (GAT) to explicitly establish the dependency relationships between words.",
"However, these approaches generally ignore the dependency relations which might identify the connections between aspects and opinion words.",
"In this section, we elaborate on the details of constructing an aspect-oriented dependency tree.",
"The syntactic structure of a sentence can be uncovered by dependency parsing, a task to generate a dependency tree to represent the grammatical structure.",
"The relationships between words can be denoted with directed edges and labels.",
"We use three examples to illustrate the relationships among aspect, attention and syntax in ABSA, as shown in Figure 1.",
"In the first example, the word like is used as a verb and it expresses a positive sentiment towards the aspect recipe , which is successfully attended by the attention-based LSTM model.",
"However, when it is used as a preposition in the second example, the model still attends to it with a high weight, resulting in a wrong prediction.",
"The third example shows a case where there are two aspects in a single sentence with different sentiment polarities.",
"For the aspect chicken , the LSTM model mistakenly assigns high attention weights to the words but and dried , which leads to another prediction mistake.",
"These examples demonstrate the limitations of the attention-based model in this task.",
"Such mistakes are likely to be avoided by introducing explicit syntactic relations between aspects and other words.",
"For example, it might be different if the model noticed the direct dependency relationship between chicken and fine in the third example, rather than with but .",
"The above analysis suggests that dependency relations with direct connections to an aspect may assist a model to focus more on related opinion words, and therefore should be more important than other relations.",
"Also, as shown in Figure 1, a dependency tree contains abundant grammar information, and is usually not rooted at a target aspect.",
"Nevertheless, the focus of ABSA is a target aspect rather than the root of the tree.",
"Motivated by the above observations, we propose a novel aspect-oriented dependency tree structure by reshaping an original dependency tree to root it at a target aspect, followed by pruning of the tree so as to discard unnecessary relations.",
"Algorithm 1 describes the above process.",
"For an input sentence, we first apply a dependency parser to obtain its dependency tree, where r ij is the dependency relation from node i to j .",
"Then, we build an aspect-oriented dependency tree in three steps.",
"Firstly, we place the target aspect at the root, where multiple-word aspects are treated as entities.",
"Secondly, we set the nodes with direct connections to the aspect as the children, for which the original Reshape and prune Figure 2: Construction of an aspect-oriented dependency tree (bottom) from an ordinary dependency tree (top).",
"dependency relations are retained.",
"Thirdly, other dependency relations are discarded, and instead, we put a virtual relation n:con ( n connected) from the aspect to each corresponding node, where n represents the distance between two nodes.",
"2 If the sentence contains more than one aspect, we construct a unique tree for each aspect.",
"Figure 2 shows an aspect-oriented dependency tree constructed from the ordinary dependency tree.",
"There are at least two advantages with such an aspect-oriented structure.",
"First, each aspect has its own dependency tree and can be less influenced by unrelated nodes and relations.",
"Second, if an aspect contains more than 2 We set n = if the distance is longer than 4.",
"one word, the dependency relations will be aggregated at the aspect, unlike in (Zhang et al., 2019; Sun et al., 2019b) which require extra pooling or attention operations.",
"The idea described above is partially inspired by previous findings (He et al., 2018a; Zhang et al., 2018; He et al., 2018b) that it could be sufficient to focus on a small subset of context words syntactically close to the target aspect.",
"Our approach provides a direct way to model the context information.",
"Such a unified tree structure not only enables our model to focus on the connections between aspects and opinion words but also facilitates both batch and parallel operations during training.",
"The motivation we put a new relation n:con is that existing parsers may not always parse sentences correctly and may miss important connections to the target aspect.",
"In this situation, the relation n:con enables the new tree to be more robust.",
"We evaluate this new relation in the experiment and the results confirm this assumption.",
"To encode the new dependency trees for sentiment analysis, we propose a relational graph attention network (R-GAT) by extending the graph attention network (GAT) (Velickovic et al., 2017) to encode graphs with labeled edges.",
"Dependency tree can be represented by a graph G with n nodes, where each represents a word in the sentence.",
"The edges of G denote the dependency between words.",
"The neighborhood nodes of node i can be represented by N i .",
"GAT iteratively updates each node representation (e.g., word embeddings) 3233 by aggregating neighborhood node representations using multi-head attention: h l +1 att i = || Kk =1 (cid:2) j N i lkij W lk h lj (1) lkij = attention ( i, j ) (2) where h l +1 att i is the attention head of node i at layer l + 1 , || Kk =1 x i denotes the concatenation of vectors from x 1 to x k , lkij is a normalized attention coeffi-cient computed by the k -th attention at layer l , W lk is an input transformation matrix.",
"In this paper, we adopt dot-product attention for attention ( i, j ) .",
"3 4.2 Relational Graph Attention Network GAT aggregates the representations of neighborhood nodes along the dependency paths.",
"However, this process fails to take dependency relations into consideration, which may lose some important dependency information.",
"Intuitively, neighborhood nodes with different dependency relations should have different influences.",
"We propose to extend the original GAT with additional relational heads.",
"We use these relational heads as relation-wise gates to control information flow from neighborhood nodes.",
"The overall architecture of this approach is shown in Figure 3.",
"Specifically, we first map the dependency relations into vector representations, and then compute a relational head as: h l +1 rel i = || Mm =1 (cid:2) j N i lmij W lm h lj (3) g lmij = ( relu ( r ij W m 1 + b m 1 ) W m 2 + b m 2 ) (4) lmij = exp ( g lmij ) (cid:3) N i j =1 exp ( g lmij ) (5) where r ij represents the relation embedding between nodes i and j .",
"R-GAT contains K attentional heads and M relational heads.",
"The final representation of each node is computed by: x l +1 i = h l +1 att i || h l +1 rel i (6) h l +1 i = relu ( W l +1 x l +1 i + b l +1 ) (7) 3 Dot product has fewer parameters but similar performance with feedforward neural network used in (Velickovic et al., 2017).",
"We use BiLSTM to encode the word embeddings of tree nodes, and obtain its output hidden state h i for the initial representation h 0 i of leaf node i .",
"Then, another BiLSTM is applied to encode the aspect words, and its average hidden state is used as the initial representation h 0 a of this root.",
"After applying R-GAT on an aspect-oriented tree, its root representation h la is passed through a fully connected softmax layer and mapped to probabilities over the different sentiment polarities.",
"Finally, the standard cross-entropy loss is used as our objective function:",
"where D contains all the sentence-aspects pairs, A represents the aspects appearing in sentence S , and contains all the trainable parameters.",
"In this section, we first introduce the datasets used for evaluation and the baseline methods employed for comparison.",
"Then, we report the experimental results conducted from different perspectives.",
"Finally, error analysis and discussion are conducted with a few representative examples.",
"Three public sentiment analysis datasets are used in our experiments, two of them are the Laptop and Restaurant review datasets from the SemEval 2014 Task (Maria Pontiki and Manandhar, 2014), 4 and the third is the Twitter dataset used by (Dong et al., 2014).",
"Statistics of the three datasets can be found in Table 1.",
"The Biaffine Parser (Dozat and Manning, 2016) is used for dependency parsing.",
"The dimension of the dependency relation embeddings is set to 300.",
"For R-GAT, we use the 300-dimensional word embeddings of GLoVe (Pennington et al., 2014).",
"For R-GAT+BERT, we use the last hidden states of the pre-trained BERT for word representations and fine-tune them on our task.",
"The PyTorch implementation of BERT 5 is used in the experiments.",
"R-GAT is shown to prefer a high dropout rate in between [0.6, 0.8].",
"As for R-GAT+BERT, it works better with a low dropout rate of around 0.2.",
"Our model is trained using the Adam optimizer (Kingma and Ba, 2014) with the default configuration.",
"A few mainstream models for aspect-based sentiment analysis are used for comparison, including:",
"Syntax-aware models: LSTM+SynATT (He et al., 2018a), AdaRNN (Dong et al., 2014), PhraseRNN (Nguyen and Shirai, 2015), ASGCN (Zhang et al., 2019), CDT (Sun et al., 2019b), GAT (Velickovic et al., 2017) and TD-GAT (Huang and Carley, 2019).",
"Attention-based models: ATAE-LSTM (Wang et al., 2016b) , IAN (Ma et al., 2017), RAM (Chen et al., 2017), MGAN (Fan et al., 2018), attention-equipped LSTM, and fine-tuned BERT (Devlin et al., 2018).",
"4 http://alt.qcri.org/semeval2014/task4/. 5 https://github.com/huggingface/transformers",
"Our methods: R-GAT is our relational graph attention network.",
"R-GAT+BERT is our R-GAT with the BiLSTM replaced by BERT, and the attentional heads of R-GAT will also be replaced by that of BERT.",
"The overall performance of all the models are shown in Table 2, from which several observations can be noted.",
"First, the R-GAT model outperforms most of the baseline models.",
"Second, the performance of GAT can be significantly improved when incorporated with relational heads in our aspect-oriented dependency tree structure.",
"It also outperforms the baseline models of ASGCN, and CDT, which also involve syntactic information in different ways.",
"This proves that our R-GAT is better at encoding the syntactic information.",
"Third, the basic BERT can already outperform all the existing ABSA models by significant margins, demonstrating the power of this large pre-trained model in this task.",
"Nevertheless, after incorporating our R-GAT (R-GAT+BERT), this strong model sees further improvement and has achieved a new state of the art.",
"These results have demonstrated the effectiveness of our R-GAT in capturing important syntactic structures for sentiment analysis.",
"The appearance of multiple aspects in one single sentence is very typical for ABSA.",
"To study the influence of multiple aspects, we pick out the reviews with more than one aspect in a sentence.",
"Each aspect is represented with its averaged (GloVe) word embeddings, and the distance between any two aspects of a sentence is calculated using the Euclidean distance.",
"If there are more than two aspects, the nearest Euclidean distance is used for each aspect.",
"Then, we select three models (GAT, R-GAT, R-GAT+BERT) for sentiment prediction, and plot the aspect accuracy by different distance ranges in Figure 4.",
"We can observe that the aspects with nearer distances tend to lead to lower accuracy scores, indicating that the aspects with high semantic similarity in a sentence may confuse the models.",
"However, with our R-GAT, both GAT and BERT can be improved across different ranges, showing that our method can alleviate this problem to a certain extent.",
"Dependency parsing plays a critical role in our method.",
"To evaluate the impact of different parsers, we conduct a study based on the R-GAT model using two well-known dependency parsers: Stanford Parser (Chen and Manning, 2014) and Biaffine Parser (Dozat and Manning, 2016).",
"6 Table 3 shows the performance of the two parsers in UAS and LAS metrics, followed by their performance for aspect-based sentiment analysis.",
"From the table, 6 The parsers are implemented by Stanford CoreNLP (Man-ning et al., 2014) and AllenNLP (Gardner et al., 2018).",
"we can find that the better Biaffine parser results in higher sentiment classification accuracies.",
"Moreover, it further implies that while existing parsers can capture most of the syntactic structures correctly, our method has the potential to be further improved with the advances of parsing techniques.",
"We further conduct an ablation study to evaluate the influence of the aspect-oriented dependency tree",
"structure and the relational heads.",
"We present the results on ordinary dependency trees for comparison.",
"From table 4, we can observe that R-GAT is improved by using the new tree structure on all three datasets, while GAT is only improved on the Restaurant and Twitter datasets.",
"Furthermore, after removing the virtual relation n:con , the performance of R-GAT drops considerably.",
"We manually examined the misclassified samples and found that most of them can be attributed to poor parsing results where aspects and their opinion words are incorrectly connected.",
"This study validates that adding the n:con relation can effectively alleviate the parsing problem and allows our model to be robust.",
"In this paper, the maximal number of n is set to 4 according to empirical tests.",
"Other values of n are also explored but the results are not any better.",
"This may suggest that words with too long dependency distances from the target aspect are unlikely to be useful for this task.",
"To analyze the limitations of current ABSA models including ours, we randomly select 100 misclassified examples by two models (R-GAT and R-GAT+BERT) from the Restaurant dataset.",
"After looking into these bad cases, we find the reasons behind can be classified into four categories.",
"As shown in Table 5, the primary reason is due to the misleading neutral reviews, most of which include an opinion modifier (words) towards the target aspect with a direct dependency connection.",
"The second category is due to the difficulty in comprehension, which may demand deep language understanding techniques such as natural language inference.",
"The third category is caused by the advice which only recommend or disrecommend people to try, with no obvious clues in the sentences indicating the sentiments.",
"The fourth category is caused by double negation expression, which is also difficult for current models.",
"Through the error analysis, we can note that although current models have achieved appealing progress, there are still some complicated sentences beyond their capabilities.",
"There ought to be more advanced natural language processing techniques and learning algorithms developed to further address them.",
"In this paper, we have proposed an effective approach to encoding comprehensive syntax information for aspect-based sentiment analysis.",
"We first defined a novel aspect-oriented dependency tree structure by reshaping and pruning an ordinary dependency parse tree to root it at a target aspect.",
"We then demonstrated how to encode the new dependency trees with our relational graph attention network (R-GAT) for sentiment classification.",
"Experimental results on three public datasets showed that the connections between aspects and opinion words can be better established with R-GAT, and the performance of GAT and BERT are significantly improved as a result.",
"We also conducted an ablation study to validate the role of the new tree structure and the relational heads.",
"Finally, an error analysis was performed on incorrectly-predicted examples, leading to some insights into this task.",
"The work was supported by the Fundamental Research Funds for the Central Universities (No.19lgpy220) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355).",
"Part of this work was done when the first author was an intern at Alibaba."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"Easy access, variety of content, and fast widespread interactions are some of the reasons making social media increasingly popular.",
"However, this rise has also enabled the propagation of fake news, text published by news sources with an intent to spread misinformation and sway beliefs.",
"Detecting it is an important and challenging problem to prevent large scale misinformation and maintain a healthy society.",
"We view fake news detection as reasoning over the relations between sources, articles they publish, and engaging users on social media in a graph framework.",
"After embedding this information, we formulate inference operators which augment the graph edges by revealing unobserved interactions between its elements, such as similarity between documents' contents and users' engagement patterns.",
"Our experiments over two challenging fake news detection tasks show that using inference operators leads to a better understanding of the social media framework enabling fake news spread, resulting in improved performance.",
"Over the last decade an increasing number of people access news online (Amy Mitchell, 2016), and use social networking platforms to engage, consume and propagate this content in their social circles.",
"Social networks provide easy means to distribute news and commentary, resulting in a sharp increase in the number of media outlets (Ribeiro et al., 2018), representing a wide range of perspectives and ideologies.",
"However, despite this diversity, content is often shared only among people that hold similar beliefs and ideologies, leading to the formation of highly segregated information communities, often referred to as echo chambers (Gentzkow and Shapiro, 2011; Quat-trociocchi et al., 2016; Dubois and Blank, 2018; Garimella et al., 2018).",
"An unfortunate consequence of this process is the rapid proliferation of fake news (Lazer et al., 2018), content which resembles news in form but lacks the journalistic standards ensuring its quality.",
"Social media platforms are now flooded with inaccurate, incomplete, and intentionally misleading information, which propagates at lightning speed between users sharing an echo-chamber.",
"According to a recent study (Vosoughi et al., 2018) false stories spread six times faster compared to real news stories.",
"Given the volume and speed of fake news spread, manual fact checking organizations cannot be used in real-time to stop it.",
"An alternative, which is arguably easier to scale, is to jointly model the claims and their source and ask who and what can you trust?",
"Answering this question requires modeling the complex information landscape on social media, consisting of news sources, the articles they release and their social context, corresponding to social media users engaging and sharing information in their networks.",
"Similar to previous work (Baly et al., 2020b, 2018) we formulate the problem as associating a discrete factuality level ( high, low , or mixed ) with news content and news sources.",
"These works treat news factuality level assessment as a traditional classification problem, using features based on social media data.",
"Our goal in this paper is to explore a different approach, driven by the principal of social homophily (McPherson et al., 2001), referring to the tendency of individuals to form social ties with others who share their views and preferences.",
"We follow the observation that the political perspectives and biases expressed in the text will be reflected in the behavior of users engaging with it.",
"Together they form information communities , connecting users with each other based on their content preferences, and with sources that provide that content.",
"In this highly connected structure, even a little evidence connecting users' preferences to false narratives can be propagated and help inform the judgements about the sources they follow and 1363 engage with.",
"Unfortunately, much of this rich social information is not directly observed, or due to the volume of these interactions, cannot be fully sampled.",
"To help alleviate this problem and capture this knowledge, we propose a set of inference operators , each augmenting the information graph with different relationships beyond what was initially seen.",
"By iteratively applying these inference operators, we are able to capture more of the hidden relationships that enable the spread of fake news through social media, and are crucial for detecting it.",
"From a technical perspective, we view fake news detection as a reasoning problem over information graphs.",
"We use the evidence provided by our existing knowledge of high vs. low factuality content (i.e., the training data), to assess the factuality of unknown content based on observed and predicted links capturing their connections.",
"This transductive process is done using a Relational Graph Neural Net (R-GCN) (Schlichtkrull et al., 2018), which creates distributed representations of nodes contextualized by the graph structure, allowing us to transfer information from observed evidence nodes to unknown source nodes using graph embedding tasks.",
"We use inference operators, which build on the similarity metric defined by the learned graph embedding, to increase the number of edges connecting the two node types.",
"These two interdependent steps are done iteratively.",
"Based on the observed data in the information graph, we can create an initial graph-contextualized representation for each node via graph embedding training.",
"We can see that based on the current trained model, there are three articles that are similar in content and embedding, and are represented in the figure by sharing a gray background.",
"Two are fake news articles, published by red background low-factuality news sources ( FakeIsUs and In-foWars ), while the right most one is published by a high-factuality source.",
"Assuming the model is not familiar with their source factuality level, then based on the observed graph information, it may not be able to distinguish between them .",
"Thus, in this work, we propose to augment the graph based on learned knowledge, via inference operators.",
"Intuitively, the goal of our inference operators is to provide additional graph edges (shown as dashed red lines), such that the graph-contextualized embeddings would capture the similarity between the two low-factuality articles and the difference compared to the high-factuality one.",
"For example, users engaging with the left two articles follow the same social influencer, who is a high activity user.",
"In the initial graph training, this observed relationship would increase these users' learned node similarity (yellow background) allowing our inference operators to connect them into a strong information community of like-minded users, that was not initially observed, and thus not easily represented by the graph embedding.",
"This newly inferred relationship can be propagated through the information graph, allowing us to have more strong information about other articles/sources/users these newly connected users interact with, thus detecting fake news better.",
"In summary, we make the following contributions: We formulate fake news detection as a reasoning problem over an information graph.",
"We suggest an inference-based graph representation learning approach, which incrementally augments the graph with inferences about users' social information and content preferences.",
"We perform extensive experiments in source-level (Baly et al., 2020a) and content-level (Nguyen et al., 2020) settings, demonstrating our inference-based graph representation approach leads to performance improvements in both cases, even in weakly supervised settings.",
"Fake News Detection : Detecting fake news using social media has been a popular research topic recently.",
"It's typically studied as a supervised learning task, in which a classifier is trained using representations of news and their social context to predict factuality of the content (Hassan et al., 2017; Shu et al., 2017; Shao et al., 2018; Prez-Rosas 1364 et al., 2017; Volkova and Jang, 2018; Shu et al., 2019a; Kim et al., 2019).",
"Unfortunately, these methods cannot capture the interactions between the users and sources that share fake news on social media, which is necessary to better understand the way fake news propagates, and ultimately detect it.",
"Due to the above mentioned limitations, researchers have recently started using Graph Neural Networks (GNNs) (to model graphs), for this task.",
"As they contain social media entities as nodes and link them through edges based on their observed interactions, graphs are able to better capture social context.",
"More specifically, through edge interactions, nodes in graphs can reinforce other nodes' representations, strengthening the overall information quality.",
"Shu et al. was one of the early works, and more recently, Han et al. utilized continual learning with GNNs to capture the propagation cascade of fake news on Twitter.",
"However, unlike our work, these and other graph models do not uncover or model hidden relationships in the data.",
"Most similar to us, Nguyen et al. proposed the Factual News Graph (FANG) also modeling the relationship between sources, articles, and users in a graph framework, by training the model to better capture social context.",
"However, rather than iteratively adding new explicit edges to uncover hidden interactions in the graph as we do by using inference operators, FANG (Nguyen et al., 2020) modified the loss function they used when training the graph to better capture user-user and user-article interactions that already exist.",
"Despite this being effective, it does not model graph interactions that were not observed in the original data, while our approach can uncover these hidden relationships as well, and thus more strongly capture the fake news propagation landscape on social media (the information communities we make explicit can help model other content better).",
"Moreover, our framework allows the graph to be continually enhanced, so we can capture more relationships than were built into the original graph (like source-source), and this leads to us achieving performance improvements over their work.",
"Iterative Graph Learning : Recently, there has also been work on learning to augment graphs, such as by using end-to-end neural models optimized for the final task (Jiang et al., 2019; Chen et al., 2020).",
"While these works do iteratively augment the graph similar to us, by doing it end-to-end they can be prone to be task specific (edges may be created solely for achieving higher classification accuracy), rather than learning a high quality social media representation.",
"This may lead to issues at test time or in inductive settings.",
"In our case, as we are adding edges based on learned graph similarities, we are strengthening the information communities that already exist, while uncovering hidden ones.",
"Further, we can easily control for the relations and amounts of them that are added.",
"We view fake news detection as reasoning over the relations between sources, articles, and engaging users in an information graph.",
"We hypothesize that due to the principle of homophily, social ties leading to the formation of online communities will capture similarities and differences in content preferences within and across communities.",
"We capture the interaction between social information and news content using a heterogeneous graph defined in Sec. 3.1, and use a Relational Graph Convolutional Network (R-GCN), to create vectorized node representations for factuality prediction.",
"The R-GCN defined in Sec. 3.2 allows our model to capture the different social communities, by creating contextualized node representations.",
"For example, an article node is represented using its contents, source, and relationships with users engaging with it (which are also represented using their relationships to other nodes).",
"The success of us capturing the social communities through the R-GCN hinges on having strong social information (i.e., graph edges) to characterize them.",
"Providing this information might not be straight-forward, as collecting social information at scale can be costly and noisy.",
"Instead, we propose inference operators, defined in Sec. 3.3, which augment the graph with new edges, using the similarity between learned nodes representation to assess their compatibility.",
"This allows the R-GCN to enrich each newly connected nodes' contextualized representation, improving factuality classification.",
"In Sec. 3.4 we describe a reasoning framework, which iteratively enriches the graph using inference operators and computes the updated node representations based on the updated graph.",
"The framework is depicted in Fig. 2.",
"Our graph consists of the following nodes: (1) S , the news sources .",
"Each sources' ( s i ) vec-1365 Figure 2: Factuality Prediction as Graph Reasoning tor consists of its Twitter and YouTube profiles embeddings (numerical + LM features details in App. A.2.1).",
"Prior work (Baly et al., 2020b) showed that these features provide a strong signal.",
"(2) A , the articles published by these sources.",
"An article a i vector captures its contents using a SBERT (Reimers and Gurevych, 2019) RoBERTa (Liu et al., 2019) model, as it provides strong, meaningful sentence embeddings.",
"(3) U , the Twitter users that interact with articles and sources, and provide the social context for them.",
"The description is applicable to the source-level (Baly et al., 2020a) and content-level (Nguyen et al., 2020) settings, where elements in S or A , res., are our classification targets.",
"The user vector is identical to the Twitter embedding mentioned above.",
"The graph is formed by first adding the sources as individual nodes.",
"Then, connecting each source with up to 300 articles ( e = { s i , a j } ).",
"Next, we add social context to the graph via Twitter users that interact with sources: (1) Following Sources: We add up to 5000 users that follow sources, connecting each user to new sources they follow ( e = { s i , u j } ).",
"These are likely to indicate a positive relationship.",
"(2) Discussing Articles: We connect each article with users that tweet its title/link within a 3 month period of publication ( e = { a i , u j } ).",
"These users provide the means for fake (and real) news spread, allowing us to model this process.",
"Finally, social interactions, a crucial component for analyzing fake news propagation, are captured by scraping up to 5000 followers of each Twitter user, and connecting existing users with edges if they one follows another.",
"Relational Graph Convolutional Networks (R-GCNs) (Schlichtkrull et al., 2018), that generalize traditional GCNs to handle different relationship types, thus allowing us to better capture their interactions and improve their representation.",
"Intuitively, R-GCNs create contextualized node representations by considering the graph structure through graph convolutions and learn a composition function: h l +1 i = ReLU (cid:80) r R (cid:80) u U r ( v i ) 1 z i,r W lr h lu , where h li is the hidden representation for the i-th node at layer l and h 0 i = v i (output of the node encoder); U r ( v i ) represents v i 's neighboring nodes connected by the relation type r ; z i,r is for normalization; and W lr represents trainable parameters.",
"To obtain meaningful node representation used for capturing factuality, we optimize the Node Classification (NC) objective of Fake News Detection.",
"After obtaining the source representations from the R-GCN, we pass them through the softmax activation function and then train using categorical cross-entropy loss, where the labels are factuality.",
"We define multiple inference operators that enable the creation of new edges based on learned information graph inferences.",
"The different operators capture our intuition about how connecting node pairs of different types would contribute to trustworthiness propagation.",
"For example, pairs of users that are not explicitly connected in the graph (i.e., do not follow each other) but share articles with similar factuality levels may have similar levels of non-trustworthiness.",
"Connecting them would provide more information to the nodes they connect to.",
"One of our inference operators adding user-user edges based on their node similarity in the embedding space captures this situation.",
"For each inference operator type discussed below, we make edge connections based on the node representations we have learned, by computing similarity scores between all pairs of nodes (using the graph node embedding efficiently with FAISS (Johnson et al., 2017)), and connecting the nodes with the top k similarity scores based on our model.",
"The first broad inference operator type adds edges between graph entities, in a similar way as a recommendation engine, suggesting entities to interact with each other, based on their graph relationships.",
"User-Source : We add edges between users and sources ( e = { u i , s j } ), using the top k most similar source/user pairs in the embedding space.",
"User-User : Pairs of users that interact with news in a similar way are connected ( e = { u i , u j } ).",
"These users are likely to have the same beliefs and may even want to follow each other if they became aware of each others' profiles.",
"User-Article : We add edges between articles and users likely to be interested them ( e = { u i , a j } )).",
"This inference can be based on the target users' interactions with similar articles, or with other users sharing these articles.",
"The second broad type connects entities based on content similarity.",
"Unlike the previous set, these types of edges are not initially observed in the graph, which is one of the benefits of our setup, allowing us to add inferences about latent relationships that underlie how information propagates , such as coordination between different sources, information flooding by publishing similar content in multiple articles and bad influencers, consistently propagating low-quality content.",
"Sources-Sources Sources likely to publish similar content at an equivalently factual level are connected ( e = { s i , s j } ).",
"Articles-Articles Articles that could be similar to each other in content are connected ( e = { a i , a j } ).",
"To do this effectively, we first identify articles pairs that discuss the same event, approximated using the publication date and entity mentions overlap (using Flair (Akbik et al., 2018)) in their title.",
"Second, we use an entailment model (Parikh et al., 2016; Gardner et al., 2017) to only connect articles that entail each other, as they are more likely to be talking about similar content.",
"Influencers Fake news is often spread by bad influencers that have a large following.",
"Over the years, Twitter has launched campaigns intended to reduce fake news spread by suspending such users.",
"This inference operator aims to do the same, by following these steps: (1) Using the training data, we label users by counting the paths to sources with a given factuality label.",
"(2) Identify users without significant label variation in their followers group, as potential news influencers.",
"We avoid users with mostly highly factuality followers.",
"(3) At inference time, we connect new users to influencers in this initial set, with a special edge type, indicating similarity to an influencer.",
"We add the top k Model Performance Acc MacroF1 #Edges M1 : Majority class 52.43 22.93 M2 : (Baly et al., 2018) 66.45 61.08 M3 : (Baly et al., 2020b) 71.52 67.25 M4 : Replication of (Baly et al., 2020b) 69.38 63.63 M5 : Node classification (NC) 68.90 63.72 M6 : InfOp Best Model 72.55 66.89 32K Table 1: Results on (Baly et al., 2020b).",
"The inference operators we defined use the graph embedding function to identify new relationships that would potentially improve the embedding quality and allow for better information propagation during learning.",
"The two steps are clearly interdependent.",
"Now, we describe our iterative graph learning framework that builds on this dependency, and continually learns better social context representations in the graph by applying the inference operators, and then retraining the graph.",
"It can be seen in Algorithm 1 and runs the following steps: (1) Initial Representation In this step, we train the graph G using the framework described in Sec 3.2 to get an initial graph representation.",
"(2) Inference Step Apply inference operators (Sec 3.3) based on the learned representation.",
"(3) Learning Step After, we continue the training process for the graph.",
"We continually apply the two steps until convergence, based on development set performance.",
"Additional details about the process are provided in Appendices A and C. When done, we retrain the model based on the final graph uncovered by applying the inference operators.",
"Through this iterative approach, we continually improve our representation of the social media framework that enables fake news propagation, and reveal hidden relationships critical to understanding fake news spread.",
"We evaluate our model's ability to predict fake news better on two challenging tasks: Fake News",
"Source Classification, and Article Classification.",
"To evaluate our model's ability to predict the factuality of news medium, we used the Media Bias/Fact Check dataset (Baly et al., 2018, 2020b).",
"The public dataset consists of 859 sources, each labeled on a 3-point factuality scale: low , mixed , and high .",
"Using the Twitter API 1 , we gather an average of 27 user engagements for each articles (Sec 3.1).",
"Our final graph consists of 69,978 users, 93,191 articles, 164,034 nodes, and 7,196,808 edges.",
"Details about the setup we used when training our graph (chosen using the dev set), and our scraping protocol are in Appendix A. Our code is available 2 .",
"To evaluate fake news article detection, we used the dataset released by (Nguyen et al., 2020), put together from Twitter data using related work on ru-mor classification (Kochkina et al., 2018; Ma et al., 2016) and fake news detection (Shu et al., 2018).",
"For each article, the dataset provides its source and a list of engaged users.",
"We also collected the followers for each user, leading to a graph with 48,895 users, 442 sources, and 1,050 articles.",
"Table 1 shows our results on source classification.",
"We evaluate our models on the average of all 5 data splits released by (Baly et al., 2020b), using 20% of the training set sources as a development set.",
"We report results on accuracy and Macro F1-score.",
"We compare to (Baly et al., 2020b, 2018) ( M2 , 3 ), who to the best of our knowledge achieve the strongest performance on this dataset.",
"As (Baly 1 https://developer.twitter.com/en/docs 2 https://github.com/hockeybro12/ FakeNews_Inference_Operators Model Split Performance AUC # New Edges FANG 90% 75.18 SVM 90% 75.89 NC 90% 83.48 InfOp 90% 85.89 10,000 FANG 70% 72.32 SVM 70% 59.18 NC 70% 73.15 InfOp 70% 77.76 10,000 FANG 50% 71.66 InfOp 50% 73.88 10,000 FANG 30% 70.36 InfOp 30% 72.63 10,000 FANG 10% 66.83 InfOp 10% 67.51 10,000 Table 2: On (Nguyen et al., 2020), we achieve the SOTA on all data splits (% of data used for training).",
"et al., 2020b) did not release the article and social media data they used, we replicate their setup using the data we scraped (and their code), and compare to that as well ( M4 ).",
"Despite us optimizing their model, our results are worse than their released performance, so we hypothesize that their data on our setup may lead to better overall performance.",
"When training our initial graph with only observed data using the Node Classification (NC) fake news loss and the same data as our replication of (Baly et al., 2020b), we obtain similar performance to their approach ( M5 vs M4 ).",
"When we apply our inference operators, and then train the graph identically (as in M5 ), we notice a 3 .",
"65% acc.",
"improvement ( M5 vs M6 ), showing the clear benefit of our inference operator setup on this task , and answering our research question that the added information helps.",
"Further, this setup achieves the state-of-the-art on (Baly et al., 2020b), exceeding both our replication with the same data (by 3.17% acc.) and their published results (by 1.03% acc.).",
"Our results for article classification are in Tab 2.",
"We compare to (Nguyen et al., 2020) (FANG), who to the best of our knowledge have the best performance, and compared to several competitive baselines in their work (Ruchansky et al., 2017).",
"Nguyen et al. are also the most similar to us (as said in Sec 2, they also train GNN's), but they do not make unobserved interactions explicit, rather they modify the loss function they used when training to better capture them.",
"Our setup is identical to the strong (Nguyen et al., 2020) setup (we use their 1368 released data and data splits) except we use different Twitter and Article representations, and we also consider Twitter follower edges.",
"In addition, FANG (Nguyen et al., 2020) considered temporal aspects of how tweets propagate, which we do not, and we hypothesize that this may improve our performance.",
"For this reason, we are using less data compared to FANG, apart from the fact that we consider Twitter user followers.",
"For proper comparison, we also evaluate our representations by training a SVM, and in App.",
"Tab 8, we evaluate our model with the same representations as FANG.",
"We evaluate all of their data splits in Tab 2 (90% -> 90% of data for training, 10% for test, etc.).",
"NC evaluates our model performance on the observed data.",
"We show the best results (extended results and details in the App. C), and as can be seen, applying inference operators also improves performance on fake news article classification on all data splits (as much as 4.61% AUC), reinforcing that explicitly learning and creating unobserved relationships in the graph enables us to detect fake news content better.",
"Also, we achieve SOTA by average 4.26%.",
"In this section, we analyze our best model with inference operators (Table 1 M6 ) for fake news source detection (Baly et al., 2020b) by answering the following research questions: (1) Ablation study: What is the contribution of each inference operator?",
"(2) Can our model learn on limited data?",
"Does our inference-based representation help?",
"(3) Can we learn meaningful user communities?",
"(4) What type of inferences does our model make for each inference operator?",
"(5) Can we detect the factuality of new content?",
"(6) What embeddings do we learn?",
"(App. B.1) (7) How many edges should we add for each inf.",
"operator?",
"(App. B.2) (8) How long does running inference operators take?",
"(App. B.3) 5.1 Ablation Study In Table 3, we evaluate each of our inference operators, trained using our joint learning and inference algorithm for up to two iterations.",
"To evaluate the accuracy of the edge connections we make when applying the inference operators (Inf. Acc), we compare the labels of the two nodes connected by an inferred edge (i.e., accurate decisions connect nodes with similar labels).",
"Since labels are only associated in our data with sources, we define a heuristic for computing labels for article and user nodes based on the most common gold label in all the sources they were directly connected to in the initial graph (ex: a user that follows 3 high factuality sources is assigned a high factuality label).",
"We also report the number of edges connected in each setup (dev set for all params).",
"We note that almost all of our models with inference operators result in performance improvements over the baselines (Tab 1 M4 , 5 ), showing that capturing these hidden relations and making them explicit with new edges helps in fake news detection.",
"Moreover, several of our inference operators (users-users/sources-sources) achieve high accuracy, while all perform better than random, showing that we can make useful edge connections after learning the initial information graph.",
"Furthermore, applying multiple inference operators in multiple iterations through our setup ( InfOp Users-Users and Users-Articles) leads to the strongest performance on this task.",
"To evaluate the potential of our approach, we also evaluate our performance if we had no inaccurate edge predictions (i.e. 100% Inf. Acc), and see significant improvements.",
"Note that this is a potential of our setup, as it involves using all the data (including the training set) to determine the user labels and then filtering out inaccurate edge predictions.",
"We suggest a first step, based on probabilistic inference (Pacheco and Goldwasser, 2021), described in the factor graph in Fig. 3, applied to the Users-Users operators.",
"We define two decision variable types, F associated with a user's factuality prediction, and E associated with the inference operator outcome on a user pair.",
"Each is associated with a scoring function, 1 scoring users factuality assignments, and 2 scoring user pairs based on embedding similarity.",
"The assignments are con-1369 Model Performance Acc Macro F1 Std Acc.",
"nected using two sets of constraints: C , ensuring factuality label consistency in users connected via a predicted edge, and T , ensuring transitivity across pairs of edges, sharing a node.",
"We use MAP inference to identify the solution edge set.",
"The results in Tab.",
"3, show a modest improvement (72.17) compared to local inference (71.97), obtained using significantly less edges (6.7K compared 30K).",
"This experimentally shows a benefit of using global probabilistic inference to more intelligently determine edges to connect, rather than only using embedding similarity as we did before (here we also considered user factuality, and other decision vari-ables/scoring functions can be added in the future).",
"Details and other potential benefits of this setup are provided in Appendix D. 5.2 Weakly Supervised Training Next, in Table 4 we evaluate our model on using limited training data for fake news source classification, by training on a smaller set of sources (still using the entire graph and full test set).",
"Here, we see the ability of our inference operators to strongly improve performance (NC vs InfOp ), as they reveal relationships the model could not learn otherwise in the weakly supervised setting, which shows how our system could be useful for detecting recently published news.",
"Now, we analyze how well our user-user inference operator allows us to learn information communities of users (Tab 5).",
"To do this, we cluster (K-means, Tab 5 shows different values of K) users before ( B1 ) and after the inference operators are applied ( B2 ), and evaluate the cluster purity based on user labels.",
"To compute purity, each cluster is assigned to the class which is most frequent in the cluster, and then the accuracy of this is measured.",
"We assigned labels to users using the same heuristic described in Sec. 5.1.",
"We see that the users cluster better after the inference operators are applied (even via bias labels), showing our ability to use them to form information communities.",
"Here, we analyze the inference operators by analyzing specific edge connections that are made.",
"We see that the model makes smart choices in connecting nodes that may be part of the same information community.",
"For example (more in Appendix B.4), a low factuality article discussing Democrats as dangerous open border fanatics' was connected to a user with bio BuildtheWall ... DEMONRATS. 5.5 Incorporating New News Content Finally, we evaluate how well our model can incorporate unseen news content, by clustering (like before) 1500 fact-checked claims from PolitiFact 3 . In Table 5, we first cluster the initial RoBERTa em-3 https://www.politifact.com 1370 beddings of these claims ( C1 ) and then add them into the graph by connecting them to five graph articles that have similar embeddings to them ( C2 ). Next, we use our user-article inference operator to connect each of these articles to users ( C3 ). It's clear that the RoBERTa embeddings statements don't cluster well by factuality. However, once they are added into the graph ( C2 ) and after they are connected via inference operators ( C3 ), they do. This further shows how our framework, especially through inference operators, allows the better detection of unseen news content (in this case claims). Not only can we determine its factuality, but we can also determine what other users are likely to interact with it. 6 Summary and Future Work We propose an approach for tackling fake news detection by continually improving social context representations. To achieve this, we developed an iterative representation learning and inference framework that learns an initial graph embedding, and then applies different inference operators to reveal hidden relationships in the graph. We continually capture more knowledge about the social dynamics that allow fake news to propagate. We showed strong performance on fake news detection, across several datasets and settings. Our current work looks at increasing the accuracy of the inference operators by adding external knowledge. We began exploring this direction by using an entailment model to infer article relationships using content similarity. We also explore additional ways to jointly model inference operators and capture the dependencies between them. We believe this work helps pave the way for further research connecting text analysis along with its social context (Pujari and Goldwasser, 2021; Hovy and Yang, 2021; Pacheco and Goldwasser, 2021; Yang et al., 2016), a natural fit for many NLP tasks. Acknowledgements We thank the anonymous reviewers of this paper for all of their vital feedback. This work was partially supported by an NSF CAREER award IIS-2048001. 7 Ethics Statement To the best of our knowledge no code of ethics was violated throughout the experiments done in this paper. We reported all hyper-parameters and other technical details necessary to reproduce our results, and release the code and dataset we collected. We evaluated our model on two different datasets and tasks (source and article fake news classification), but it is possible that results may differ on other datasets. However, we feel our methodology is solid and applies to any social media fake news setting. For space constraint we moved some of the technical details and discussion to the Appendix section. The results we reported supports our claims in this paper and we believe it is reproducible. Any qualitative result we report is an outcome from a machine learning model that does not represent the authors' personal views.",
"For any results that we discuss on the data we use, we will not include account information and all results will be anonymous.",
"In our dataset release for (Baly et al., 2020b), we include sources, users, and articles.",
"Sources are public information provided in (Baly et al., 2020b), and we map each to an ID.",
"We scraped up to 300 articles for each source (as many as we could), which we map to an ID.",
"We also scraped users that interact with articles, which we also release.",
"Each user is given by their Twitter ID (which may be invalid or not provided if the user deleted their profile), and their graph ID.",
"The Twitter ID of the Tweet the user propagates about the article is also given.",
"We also scraped users that follow sources; this information is released by providing the user IDs that interact with each source ID.",
"Finally, we provide the representations for each user, article, and source we used as our initial embedding in the graph.",
"Our data is meant for academic research purposes and should not be used outside of academic research contexts.",
"All our data is in English.",
"Our framework in general does not create direct societal consequence and is intended to be used to defend against fake news.",
"While our model could be used to build better fake news spreaders, our approach of identifying information communities through inference operators could also help defend against that."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"objective",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"result",
"abstain",
"method",
"result",
"method",
"result",
"other",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method"
] |
[
"Creating effective visualization is an important part of data analytics.",
"While there are many libraries for creating visualizations, writing such code remains difficult given the myriad of parameters that users need to provide.",
"In this paper, we propose the new task of synthesizing visualization programs from a combination of natural language utterances and code context.",
"To tackle the learning problem, we introduce PLOTCODER , a new hierarchical encoder-decoder architecture that models both the code context and the input utterance.",
"We use PLOTCODER to first determine the template of the visualization code, followed by predicting the data to be plotted.",
"We use Jupyter notebooks containing visualization programs crawled from GitHub to train PLOTCODER .",
"On a comprehensive set of test samples from those notebooks, we show that PLOTCODER correctly predicts the plot type for about 70% of the samples and synthesizes the correct programs for 35% of the samples, performing 3-4.5% better than the baselines.",
"1 Introduction",
"Visualizations play a crucial role in obtaining insights from data.",
"While a number of libraries (Hunter, 2007; Seaborn, 2020; Bostock et al., 2011) have been developed for creating visualizations that range from simple scatter plots to complex 3D bar charts, writing visualization code remains a difficult task.",
"For instance, drawing a scatter plot using the Python matplotlib library can be done using both the scatter and plot methods, and the scatter method (Matplotlib, 2020) takes in 2 required parameters (the values to plot) along with 11 other optional parameters (marker type, color, etc.), with some parameters having numeric types (e.g., the size of each marker) and some being arrays (e.g., the list of colors for each collection of the plotted data, where each color is specified as a string or another array of RGB values).",
"Footnote 1: Our code and data are available at https://github.",
"Looking up each parameter's meaning and its valid values remains tedious and error-prone, and the multitude of libraries available further compounds the difficulty for developers to create effective visualizations.",
"In this paper, we propose to automatically synthesize visualization programs using a combination of natural language utterances and the programmatic context that the visualization program will reside (e.g., code written in the same file as the visualization program to load the plotted data), focusing on programs that create static visualizations (e.g., line charts, scatter plots, etc).",
"While there has been prior work on synthesizing code from natural language (Zettlemoyer and Collins, 2012; Oda et al., 2015; Wang et al., 2015; Yin et al., 2018), and with additional information such as database schemas (Zhong et al., 2017; Yu et al., 2018, 2019b,a) or input-output examples (Polosukhin and Skidanov, 2018; Zavershynskyi et al., 2018), synthesizing general-purpose code from natural language remains highly difficult due to the ambiguity of the natural language input and the complexity of the target.",
"Our key insight in synthesizing visualization programs is to leverage their properties: they tend to be short, do not use complex programmatic control structures (typically a few lines of method calls without any control flow or loop constructs), with each method call restricted to a single plotting command (e.g., scatter , pie ) along with its parameters (e.g., the plotted data).",
"This influences our model architecture design as we will explain.",
"To study the visualization code synthesis problem, we use the Python Jupyter notebooks from the JuiCe dataset (Agashe et al., 2019), where each notebook contains the visualization program and its programmatic context.",
"These notebooks are crawled from GitHub and written by various programmers, thus a main challenge is understanding the complexity and the noisiness of real-world programmatic contexts and the huge variance in the quality of natural language comments.",
"Unfortunately, using standard LSTM-based models and Transformer architectures (Vaswani et al., 2017) fails to solve the task, as noted in prior work (Agashe et al., 2019).",
"We observe that while data to be plotted is usually stored in pandas dataframes (Pandas, 2020), they are not explicitly annotated in JuiCe.",
"Hence, unlike prior work, we augment the programmatic context with dataframe names and their schema when available in predicting the plotted data.",
"We next utilize our insight above and design a hierarchical deep neural network code generation model called PLOTCODER that decomposes synthesis into two subtasks: generating the plot command, then the parameters to pass in given the command.",
"PLOTCODER uses a pointer network architecture (Vinyals et al., 2015), which allows the model to directly select code tokens in the previous code cells in the same notebook as the plotted data.",
"Meanwhile, inspired by the schema linking techniques proposed for semantic parsing with structured inputs, such as text to SQL tasks (Iyer et al., 2017; Wang et al., 2019a; Guo et al., 2019), PLOTCODER 's encoder connects the embedding of the natural language descriptions with their corresponding code fragments in previous code cells within each notebook.",
"Although the constructed links can be noisy because the code context is less structured than the database tables in text-to-SQL problems, we observe that our approach results in substantial performance gain.",
"[Figure residue: dataframe-exploration code context, the dataframe schema df: ['Catch_Rate', 'Speed', 'Weight_kg', 'Color', 'Body_Style'], and ground-truth vs. predicted plotting commands, e.g., ground truth plt.scatter(age, duration) vs. prediction plt.scatter(duration, age).]",
"We evaluate PLOTCODER 's ability to synthesize visualization programs using Jupyter notebooks of homework assignments or exam solutions.",
"On the gold test set where the notebooks are official solutions, our best model correctly predicts the plot types for over 80% of samples, and precisely predicts both the plot types and the plotted data for over 50% of the samples.",
"On the more noisy test splits with notebooks written by students, which may include work-in-progress code, our model still achieves over 70% plot type prediction accuracy, and around 35% accuracy for generating the entire code, showing how PLOTCODER 's design decisions improve our prediction accuracy.",
"[Figure 1 contents: natural language requests (e.g., 'Explore the relationship between rarity and a skill of your choice', 'Create a scatterplot to visualize how the skill depends upon the rarity of the pokemon'), local code contexts, and distant dataframe contexts.]",
"Figure 1: An example of the plot code synthesis problem studied in this work.",
"Given the natural language, code context within a few code cells from the target code, and other code snippets related to dataframes, PLOTCODER synthesizes the data visualization code.",
"While the input specification only includes the natural language for most tasks, prior work also uses additional information for program prediction, including database schemas and contents for SQL query synthesis (Zhong et al., 2017; Yu et al., 2018, 2019b,a), input-output examples (Polosukhin and Skidanov, 2018; Zavershynskyi et al., 2018), and code context (Iyer et al., 2018; Agashe et al., 2019).",
"There has also been work on synthesizing data manipulation programs only from input-output examples (Drosos et al., 2020; Wang et al., 2017).",
"In this work, we focus on synthesizing visualization code from both the natural language description and the code context, and we construct our benchmark based on the Python Jupyter notebooks from the JuiCe dataset (Agashe et al., 2019).",
"Compared to JuiCe's input format, we also annotate dataframe schema if available, which is especially important for visualization code synthesis.",
"Prior work has studied generating plots from other specifications.",
"Falx (Wang et al., 2019b, 2021) synthesizes plots from input-output examples, but does not use any learning technique, focusing instead on developing a domain-specific language for plot generation.",
"In (Dibia and Demiralp, 2019), the authors apply a standard LSTM-based sequence-to-sequence model with attention for plot generation, but the model takes in only raw data to be visualized with no natural language input.",
"The visualization code synthesis problem studied in our work is much more complex, where both the natural language and the code context can be long, and program specifications are implicit and ambiguous.",
"Our design of hierarchical program decoding is inspired by prior work on sketch learning for program synthesis, where various sketch representations have been proposed for different applications (Solar-Lezama, 2008; Murali et al., 2018; Dong and Lapata, 2018; Nye et al., 2019).",
"Compared to other code synthesis tasks, a key difference is that our sketch representation distinguishes between dataframes and other variables, which is important for synthesizing visualization code.",
"Our code synthesis problem is also related to code completion, i.e., autocompleting the program given the code context (Raychev et al., 2014; Li et al., 2018; Svyatkovskiy et al., 2020).",
"However, standard code completion only requires the model to generate a few tokens following the code context, rather than entire statements.",
"In contrast, our task requires the model to synthesize complete and executable visualization code.",
"Furthermore, unlike standard code completion, our model synthesizes code from both the natural language description and code context.",
"Nevertheless, when the prefix of the visualization code is given, our model could also be used for code completion, by including the given partial code as part of the code context.",
"We now discuss our problem setup of synthesizing visualization code in programmatic context, where the model input includes different types of specifications.",
"We first describe the model inputs, then introduce our code canonicalization process to make it easier to train our models and evaluate the accuracy, and finally our evaluation metrics.",
"We illustrate our program specification in Figure 1, which represents a Jupyter notebook fragment.",
"Our task is to synthesize the visualization code given the natural language description and code from the preceding cells.",
"To do so, our model takes in the following inputs: The natural language description for the visualization, which we extract from the natural language markdown above the target code cell containing the gold program in the notebook.",
"The local code context, defined as a few code cells that immediately precede the target code cell.",
"The number of cells to include is a tunable hyper-parameter to be described in Section 5.",
"The code snippets related to dataframe manipulation that appear before the target code cell in the notebook, but are not included in the local code context.",
"We refer to such code as the distant dataframe context.",
"When such context contains code that uses dataframes, they are part of the model input by default.",
"As mentioned in Section 1, unlike JuiCe, we also extract the code snippets related to dataframes, and annotate the dataframe schemas according to their syntax trees.",
"As shown in Figure 1, knowing the column names in each dataframe is important for our task, as dataframes are often used for plotting.",
"One way to train our models is to directly utilize the plotting code in Jupyter notebooks as the ground truth.",
"However, due to the variety of plotting APIs and coding styles, such a model rarely predicts exactly the same code as written in Jupyter notebooks.",
"For example, there are at least four ways in Matplotlib (and similar in other libraries) to create a scatter plot of column 'y' against column 'x' from a dataframe df: plt.scatter(df['x'],df['y']), plt.plot(df['x'],df['y'],'o'), df.plot.scatter(x='x',y='y'), and df.plot(kind='scatter',x='x',y='y').",
"Moreover, given that the natural language description is ambiguous, many plot attributes are hard to precisely predict.",
"For example, from the context shown in Figure 1, there are many valid ways to specify the plot title, the marker style, axis ranges, etc.",
"In our experiments, we find that when trained on raw target programs, fewer than 5% predictions are exactly the same as the ground truth, and a similar phenomenon is also observed earlier (Agashe et al., 2019).",
"Therefore, we design a canonical representation for plotting programs, which covers the core of plot generation.",
"Specifically, we convert the plotting code into one of the following templates: LIB.PLOT_TYPE(X, {Y}), where LIB is a plotting library, and PLOT_TYPE is the plot type to be created.",
"The number of arguments may vary for different PLOT_TYPE, e.g., 1 for histograms and pie charts, and 2 for scatter plots.",
"L_0 \n L_1 \n ... \n L_m, where each L_i is a plotting command in the above template, and the \n are separators.",
"For example, when using plt as the library (a commonly used abbreviation of matplotlib.pyplot), we convert df.plot(kind='scatter',x='x',y='y') into plt.scatter(df['x'],df['y']), where LIB = plt and PLOT_TYPE = scatter.",
"Plotting code in other libraries could be converted similarly.",
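To make this canonicalization concrete, here is a minimal, hedged sketch of how such a rewrite could be implemented for two common scatter variants; the helper name `canonicalize_scatter` and the limited set of handled cases are our own illustration, not the authors' released pipeline, and it only covers the single-command template.

```python
import ast

# Minimal sketch (Python 3.9+, for ast.unparse) of plot-call canonicalization.
# Only two common scatter variants are handled, purely for illustration.
def canonicalize_scatter(code: str) -> str:
    call = ast.parse(code, mode="eval").body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Attribute):
        return code
    func, owner = call.func, ast.unparse(call.func.value)
    if func.attr == "plot" and owner == "plt":
        # plt.plot(x, y, 'o') -> plt.scatter(x, y)
        if len(call.args) == 3 and ast.unparse(call.args[2]) == "'o'":
            x, y = (ast.unparse(a) for a in call.args[:2])
            return f"plt.scatter({x}, {y})"
    if func.attr == "plot":
        # df.plot(kind='scatter', x='x', y='y') -> plt.scatter(df['x'], df['y'])
        kw = {k.arg: ast.unparse(k.value) for k in call.keywords}
        if kw.get("kind") == "'scatter'":
            x, y = kw["x"].strip("'"), kw["y"].strip("'")
            return f"plt.scatter({owner}['{x}'], {owner}['{y}'])"
    return code  # leave unrecognized calls unchanged

print(canonicalize_scatter("df.plot(kind='scatter', x='x', y='y')"))
# -> plt.scatter(df['x'], df['y'])
```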
"The tokens that represent the plotted data, i.e., X and Y, are annotated in the code context as follows: VAR, when the token is a variable name, e.g., x and y in Figure 1; DF, when the token is a Pandas dataframe or a Python dictionary, e.g., df in Figure 1; and STR, when the token is a column name of a dataframe or a key name of a Python dictionary, such as 'Catch_Rate' and 'Speed' in Figure 1.",
"The above annotations are used to cover different types of data references.",
"For example, a column in a dataframe is usually referred to as DF[STR] , and sometimes as DF[VAR] where VAR is a string.",
"In Section 4.2, we will show how to utilize these annotations for hierarchical program decoding, where our decoder first generates a program sketch that predicts these token types without the plotted data, then predicts the actual plotted data subsequently.",
"Plot type accuracy.",
"To compute this metric, we categorize all plots into several types, and a prediction is correct when it belongs to the same type as the ground truth.",
"In particular, we consider the following categories: (1) scatter plots (e.g., generated by plt.scatter); (2) histograms (e.g., generated by plt.hist); (3) pie charts (e.g., generated by plt.pie); (4) a scatter plot overlaid by a line (e.g., as shown in Figure 1, or generated by sns.lmplot); (5) a plot including a kernel density estimate (e.g., plots generated by sns.distplot or sns.kdeplot); and (6) others, which are mostly plots generated by plt.plot.",
"Plotted data accuracy. A prediction of the plotted data is considered correct when it selects the same data to plot as the ground truth.",
"Unless otherwise specified, the ordering of variables must match the ground truth as well, i.e., swapping the data used to plot x and y axes result in different plots.",
"Program accuracy.",
"We consider a predicted program to be correct if both the plot type and plotted data are correct.",
"As discussed in Section 3.2, we do not evaluate the correctness of other plot attributes because they are mostly unspecified.",
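As a concrete reading of these three metrics, the following is a small sketch under our own naming (`Plot`, `plot_type_correct`, etc., are not the authors' identifiers); the permutation-invariant variant corresponds to the relaxed evaluation discussed later in the results.

```python
from dataclasses import dataclass

@dataclass
class Plot:
    plot_type: str  # e.g. "scatter", "hist", "pie", ...
    data: tuple     # ordered plotted-data tokens, e.g. ("df['x']", "df['y']")

def plot_type_correct(pred: Plot, gold: Plot) -> bool:
    return pred.plot_type == gold.plot_type

def data_correct(pred: Plot, gold: Plot, ordered: bool = True) -> bool:
    if ordered:  # default: axis/plot order must match the ground truth
        return pred.data == gold.data
    return sorted(pred.data) == sorted(gold.data)  # permutation-invariant variant

def program_correct(pred: Plot, gold: Plot) -> bool:
    # program accuracy: both the plot type and the plotted data are correct
    return plot_type_correct(pred, gold) and data_correct(pred, gold)

pred = Plot("scatter", ("duration", "age"))
gold = Plot("scatter", ("age", "duration"))
print(program_correct(pred, gold))              # False (wrong axis order)
print(data_correct(pred, gold, ordered=False))  # True
```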
"In this section, we present PLOTCODER , a hierarchical model architecture for synthesizing visualization code from natural language and code context.",
"PLOTCODER includes an LSTM-based encoder (Hochreiter and Schmidhuber, 1997) to jointly embed the natural language and code context, as well as a hierarchical decoder that generates API calls and selects data for plotting.",
"We provide an overview of our model architecture in Figure 2.",
"4.1 NL-Code Context Encoder",
"PLOTCODER's encoder computes a vector representation for each token in the natural language description and the code context, where the code context is the concatenation of the code snippets describing dataframe schemas and the local code cells, as described in Section 3.1.",
"NL encoder.",
"We build a vocabulary for the natural language tokens, and train an embedding matrix for it.",
"Afterwards, we use a bi-directional LSTM to encode the input natural language sequence (denoted as LSTM$_{nl}$), and use the LSTM's output at each timestep as the contextual embedding vector for each token.",
"Code context encoder.",
"We build a vocabulary $V_c$ for the code context, and train an embedding matrix for it.",
"$V_c$ also includes the special tokens {VAR, DF, STR} used for sketch decoding in Section 4.2.",
"We train another bi-directional LSTM (LSTM$_c$), which computes a contextual embedding vector for each token in a similar way to the natural language encoder.",
"We denote the hidden state of LSTM$_c$ at the last timestep as $H_c$.",
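A minimal PyTorch sketch of these two encoders, assuming illustrative embedding and hidden sizes; this is our reconstruction of the description above, not the released PLOTCODER code.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, nl_vocab, code_vocab, emb=256, hidden=256):
        super().__init__()
        self.nl_emb = nn.Embedding(nl_vocab, emb)
        self.code_emb = nn.Embedding(code_vocab, emb)  # V_c includes VAR/DF/STR
        self.lstm_nl = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.lstm_c = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)

    def forward(self, nl_ids, code_ids):
        h_nl, _ = self.lstm_nl(self.nl_emb(nl_ids))        # (B, T_nl, 2*hidden)
        h_c, (h_last, _) = self.lstm_c(self.code_emb(code_ids))
        # H_c: final hidden state of LSTM_c, later used to initialize the decoder
        H_c = torch.cat([h_last[-2], h_last[-1]], dim=-1)  # (B, 2*hidden)
        return h_nl, h_c, H_c
```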
"NL-code linking.",
"Capturing the correspondence between the code context and natural language is crucial in achieving a good prediction performance.",
"For example, in Figure 2, PLOTCODER infers that the dataframe column age should be plotted, as this column name is mentioned in the natural language description.",
"Figure 2: Overview of the PLOTCODER architecture.",
"Inspired by this observation, we design the NL-code linking mechanism to explicitly connect the embedding vectors of code tokens and their corresponding natural language words.",
"Specifically, for each token in the code context that also occurs in the natural language, let $h_c$ and $h_{nl}$ be its embedding vectors computed by LSTM$_c$ and LSTM$_{nl}$, respectively; we compute a new code token embedding vector as $h'_c = W_l([h_c; h_{nl}])$, where $W_l$ is a linear layer, and $[h_c; h_{nl}]$ is the concatenation of $h_c$ and $h_{nl}$.",
"When no natural language word matches the code token, $h_{nl}$ is the embedding vector of the [EOS] token at the end of the natural language description.",
"When we include this NL-code linking component in the model, $h'_c$ replaces the original embedding $h_c$ for each token in the code context, and the new embedding is used for decoding.",
"We observe that many informative natural language descriptions explicitly state the variable names and dataframe columns for plotting, which makes our NL-code linking effective.",
"Moreover, this component is especially useful when the variable names for plotting are unseen in the training set, thus NL-code linking provides the only cue to indicate that these variables are relevant.",
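The linking step itself is a single concatenate-and-project operation; below is a hedged sketch in which `match_idx` (a precomputed NL index for each code token, falling back to the [EOS] position) is our own framing of the token matching described above.

```python
import torch
import torch.nn as nn

def link_nl_code(h_c, h_nl, match_idx, W_l: nn.Linear):
    """
    h_c:       (T_c, d)  code-token embeddings from LSTM_c
    h_nl:      (T_nl, d) NL-token embeddings from LSTM_nl (last row = [EOS])
    match_idx: (T_c,)    index into h_nl of the matching NL word,
                         or T_nl - 1 ([EOS]) when there is no match
    """
    matched = h_nl[match_idx]                       # (T_c, d)
    return W_l(torch.cat([h_c, matched], dim=-1))   # h'_c = W_l([h_c; h_nl])

d = 512
W_l = nn.Linear(2 * d, d)
h_c, h_nl = torch.randn(30, d), torch.randn(12, d)
match_idx = torch.full((30,), 11, dtype=torch.long)  # toy: no matches -> [EOS]
h_c_new = link_nl_code(h_c, h_nl, match_idx, W_l)    # replaces h_c downstream
```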
"We train another LSTM to decode the visualization code sequence, denoted as LSTM$_p$.",
"Our decoder generates the program in a hierarchical way.",
"At each timestep, the model first predicts a token from the code token vocabulary that represents the program sketch.",
"As shown in Figure 2, the program sketch does not include the plotted data.",
"After that, the decoder predicts the plotted data, where it employs a copy mechanism (Gu et al., 2016; Vinyals et al., 2015) to select tokens from the code context.",
"First, we initialize the hidden state of LSTM$_p$ with $H_c$, the final hidden state of LSTM$_c$, and the start token is [GO] for both sketch and full program decoding.",
"At each step $t$, let $s_{t-1}$ and $o_{t-1}$ be the sketch token and output program token generated at the previous step.",
"Note that $s_{t-1}$ and $o_{t-1}$ differ only when $s_{t-1} \in$ {VAR, DF, STR}, in which case $o_{t-1}$ is the actual data name with the corresponding type.",
"Let $es_{t-1}$ and $eo_{t-1}$ be the embedding vectors of $s_{t-1}$ and $o_{t-1}$ respectively, which are computed using the same embedding matrix as the code context encoder.",
"The input of LSTM$_p$ is the concatenation of the two embedding vectors, i.e., $[es_{t-1}; eo_{t-1}]$.",
"Attention.",
"To compute attention vectors over the natural language description and the code context, we employ the two-step attention in (Iyer et al., 2018).",
"Specifically, we first use $hp_t$, the decoder hidden state at step $t$, to compute the attention vector over the natural language input using the standard attention mechanism (Bahdanau et al., 2015), and we denote the attention vector as $attn_t$.",
"Then, we use $attn_t$ to compute the attention vector over the code context, denoted as $attp_t$.",
"Sketch decoding.",
"For sketch decoding, the model computes the probability distribution over all sketch tokens in the code token vocabulary $V_c$: $Pr(s_t) = \mathrm{Softmax}(W_s(hp_t + attn_t + attp_t))$, where $W_s$ is a linear layer.",
"For hierarchical decoding, we do not allow the model to directly decode the names of the plotted data during sketch decoding, so $s_t$ is selected only from the valid sketch tokens, such as library names, plotting function names, and the special tokens for plotted data representation in the templates discussed in Section 3.2.",
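Putting the two-step attention and the sketch softmax together, a simplified decoding step might look as follows; the dot-product `attend` helper and the `valid_mask` restricting the softmax to sketch tokens are our own simplifications of the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(query, memory):
    # query: (B, d); memory: (B, T, d) -> context vector (B, d)
    scores = torch.bmm(memory, query.unsqueeze(-1)).squeeze(-1)  # (B, T)
    weights = F.softmax(scores, dim=-1)
    return torch.bmm(weights.unsqueeze(1), memory).squeeze(1)

def sketch_step(hp_t, h_nl, h_c, W_s: nn.Linear, valid_mask):
    attn_t = attend(hp_t, h_nl)   # first attend over the NL description
    attp_t = attend(attn_t, h_c)  # then attend over the code context
    logits = W_s(hp_t + attn_t + attp_t)
    # forbid directly decoding plotted-data names during sketch decoding
    logits = logits.masked_fill(~valid_mask, float("-inf"))
    return F.softmax(logits, dim=-1), attn_t, attp_t
```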
"Data selection.",
"For $s_t \in$ {VAR, DF, STR}, we use the copy mechanism to select the plotted data from the code context.",
"Specifically, our decoder includes 3 pointer networks (Vinyals et al., 2015) for selecting data with the type VAR , DF , and STR respectively, and they employ similar architectures but different model parameters.",
"We take variable name selection as an instance to illustrate our data selection approach using the copy mechanism.",
"Table 1 (Dataset statistics; columns: Train / Dev (gold) / Test (gold) / Dev (hard) / Test (hard)): All 38971 / 57 / 48 / 827 / 894; Scatter 11895 / 19 / 17 / 254 / 276; Hist 8856 / 14 / 11 / 182 / 175; Pie 574 / 1 / 1 / 14 / 13; Scatter+Plot 1533 / 3 / 1 / 34 / 57; KDE 2609 / 3 / 5 / 51 / 64; Others 13504 / 17 / 13 / 292 / 309.",
"We first compute $v_t = W_v(attn_t)$, where $W_v$ is a linear layer.",
"For the $i$-th token $c_i$ in the code context, let $hc_i$ be its embedding vector; we compute its prediction probability as $Pr(c_i) = \frac{\exp(v_t^\top hc_i)}{\sum_j \exp(v_t^\top hc_j)}$. After that, the model selects the token with the highest prediction probability as the next program token $o_t$, and uses the corresponding embedding vectors for $s_t$ and $o_t$ as the input for the next decoding step of LSTM$_p$.",
"The decoding process terminates when the model generates the [EOF] token.",
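A sketch of one such pointer, following the equations above; restricting candidates with a `type_mask` (so that, e.g., the VAR pointer only scores variable tokens) is our own simplification of using three separately parameterized pointer networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def select_data(attn_t, hc, W_v: nn.Linear, type_mask):
    """
    attn_t:    (B, d)    NL attention vector at step t
    hc:        (B, T, d) code-context token embeddings
    type_mask: (B, T)    True where the token has the requested type
    """
    v_t = W_v(attn_t)                                      # v_t = W_v(attn_t)
    scores = torch.bmm(hc, v_t.unsqueeze(-1)).squeeze(-1)  # v_t^T hc_i
    scores = scores.masked_fill(~type_mask, float("-inf"))
    probs = F.softmax(scores, dim=-1)                      # Pr(c_i)
    return probs.argmax(dim=-1)  # index of the copied token o_t
```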
"In this section, we first describe our dataset for visualization code synthesis, then introduce our experimental setup and discuss the results.",
"We build our benchmark upon the JuiCe dataset, selecting the notebooks that call plotting APIs, including those from matplotlib.pyplot (plt), pandas.DataFrame.plot, seaborn (sns), ggplot, bokeh, plotly, geoplotlib, and pygal.",
"Over 99% of the samples use plt , pandas.DataFrame.plot , or sns .",
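For intuition, here is a hedged sketch of how such plotting-API filtering might be done with a regular expression; the pattern below only approximates the selection (covering the three dominant APIs) and is not the authors' extraction script.

```python
import re

# Rough detector for plotting calls in a notebook cell (illustrative only).
PLOT_CALL = re.compile(
    r"\b(plt|matplotlib\.pyplot|sns|seaborn)\.\w+\s*\("  # plt.scatter(...), sns.kdeplot(...)
    r"|\.plot\s*(\.|\()"                                 # df.plot(...) / df.plot.scatter(...)
)

def is_plot_cell(source: str) -> bool:
    return bool(PLOT_CALL.search(source))

print(is_plot_cell("df.plot(kind='scatter', x='x', y='y')"))  # True
print(is_plot_cell("df.groupby('Color')['Attack'].mean()"))   # False
```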
"We first extract plot samples from the original dev and test splits of JuiCe to construct Dev (gold) and Test (gold) .",
"However, the gold splits are too small to obtain quantitative results.",
"Therefore, we extract around 1,700 Jupyter notebooks of homeworks and exams from JuiCe's training set, and split them roughly evenly into Dev (hard) and Test (hard) .",
"All remaining plot samples from the JuiCe training split are included in our training set.",
"The length of the visualization programs to be generated varies between 6 and 80 tokens, but the code context is typically much longer.",
"We summarize the dataset statistics in Table 1.",
"5.2 Evaluation Setup",
"Implementation details.",
"Unless otherwise specified, for the input specification we include K = 3 previous code cells as the local context, which usually provides the best accuracy.",
"We set 512 as the length limit for both the natural language and the code context.",
"For all model architectures, we train them for 50 epochs, and select the best checkpoint based on the program accuracy on the Dev (hard) split.",
"More details are deferred to Appendix A.",
"Baselines.",
"We compare the full PLOTCODER against the following baselines: (1) Hierarchy : the encoder is the same as in the full PLOTCODER , but the decoder directly generates the full program without predicting the sketch.",
"(2) Link : the encoder does not use NL-code linking, and the decoder is not hierarchical.",
"(3) LSTM : the model does not use NL-code linking, copy mechanism, and hierarchical decoding.",
"The encoder still uses two separate LSTMs to embed the natural language and code context, which performs better than the LSTM baseline in prior work (Agashe et al., 2019).",
"(4) + BERT: we use the same hierarchical decoder as the full model, but replace the encoder with a Transformer architecture (Vaswani et al., 2017) initialized from a pre-trained model, and we fine-tune the encoder together with the other parts of the model.",
"We evaluated two pre-trained models.",
"One is RoBERTa-base (Liu et al., 2019), an improved version of BERT-Base (Devlin et al., 2018) pre-trained on a large text corpus.",
"Another is codeBERT (Feng et al., 2020), which has the same architecture as RoBERTa-base, but is pre-trained on GitHub code in several programming languages including Python, and has demonstrated good performance on code retrieval tasks.",
"To demonstrate the effectiveness of target code canonicalization discussed in Section 3.2, we also compare with models that are directly trained on the raw ground truth code from the same set of Jupyter notebooks.",
"We present the program prediction accuracies in Table 2. First, training on the canonicalized code significantly boosts the performance for all models, suggesting that canonicalization improves data quality and hence prediction accuracies.",
"When trained with target code canonicalization, the full PLOTCODER significantly outperforms other model variants on different data splits.",
"On the hard data splits, the hierarchical PLOTCODER predicts 35% of the samples correctly, improving over the non-hierarchical model by 3-4.5%.",
"Meanwhile, NL-code linking enables the model to better capture the correspondence between the code context and the natural language, and consistently improves the performance when trained on canonicalized target code.",
"Without the copy mechanism, the baseline LSTM cannot predict any token outside of the code vocabulary.",
"Therefore, this model performs worse than other LSTM-based models, especially on plotted data accuracies, as shown in Table 3.",
"Interestingly, while our hierarchical decoding, NL-code linking, and copy mechanism are mainly designed to improve the prediction accuracy of the plotted data, as shown in Table 4, we observe that the plot type accuracies of our full model are also mostly better, especially on the hard splits.",
"To better understand this, we break down the results by plot type, and observe that the most significant improvement comes from the predictions of scatter plots (S) and plots in Others category.",
"We posit that these two categories constitute the majority of the dataset, and the hierarchical model learns to better categorize plot types from a large number of training samples.",
"In addition, we observe that the full model does not always perform better than other baselines on data splits of small sizes, and the difference mainly comes from the ambiguity in the natural language description.",
"We defer more discussion to Section 5.4.",
"Also, using BERT-like encoders does not improve the results.",
"This might be due to mismatches between the pre-training data distribution and vocabularies and those of our task.",
"Specifically, RoBERTa is pre-trained on English passages, which does not include many visualization-related descriptions and code comments.",
"Therefore, the subword vocabulary utilized by RoBERTa breaks down important keywords for visualization, e.g., scatterplots and histograms into multiple words, which limits model performance, especially for plot type prediction.",
"Using codeBERT improves over RoBERTa, but it still does not improve over the LSTM-based models, which may again be due to vocabulary mismatch.",
"As a result, in Table 4, the plot type accuracies of both models using BERT-like encoders are considerably lower than the LSTM-based models.",
"To better understand the plotted data prediction performance, in addition to the default plotted data accuracy that requires the data order to be the same as the ground truth, we also evaluate a relaxed version without ordering constraints.",
"Table 2 (Evaluation on program accuracy; columns: Test (hard) / Dev (hard) / Test (gold) / Dev (gold)). With code canonicalization: Full Model 34.79% / 34.70% / 56.25% / 47.37%; Hierarchy 30.20% / 31.56% / 45.83% / 47.37%; Link 29.98% / 28.05% / 43.75% / 45.61%; LSTM 26.17% / 24.67% / 41.67% / 40.35%; +CodeBERT 33.11% / 34.58% / 54.17% / 35.09%; +RoBERTa 32.77% / 33.37% / 50.00% / 26.32%. Without code canonicalization: Full Model 20.58% / 22.73% / 22.92% / 28.07%; Hierarchy 20.25% / 22.85% / 18.75% / 26.32%; Link 20.02% / 21.77% / 20.83% / 24.56%; LSTM 16.22% / 16.93% / 16.67% / 24.56%; +CodeBERT 20.92% / 22.61% / 22.92% / 24.56%; +RoBERTa 20.47% / 22.37% / 20.83% / 24.56%.",
"Note that the ordering includes two factors: (1) the ordering of the plotted data for the different axes; and (2) the ordering of plots when multiple plots are included.",
"We observe that the ordering issue happens for around 1.5% of samples, and is more problematic for scatter plots (S) and Others.",
"Figure 3 shows sample predictions where the model selects the correct set of data to plot, but the ordering is wrong.",
"Although sometimes the natural language explicitly specifies which axes to plot (e.g., Figure 3(a)), such descriptions are mostly implicit (e.g., Figure 3(b)), making it hard for the model to learn.",
"Full results on different plot types are in Section 5.4.",
"To evaluate the effect of including different input specifications, we present the results in Table 5.",
"Specifically, NL means the model input does not include the natural language, and Distant DFs means the code context only includes the local code cells.",
"Interestingly, even without the natural language description, PLOTCODER correctly predicts a considerable number of samples.",
"Figure 4 shows sample correct predictions without relying on the natural language description.",
"To predict the plotted data, a simple yet effective heuristic is to select variable names appearing in the most recent code context.",
"This is also one possible reason that causes the wrong data ordering prediction in Figure 3(a); in fact, the prediction is correct if we change the order of assignment statements for variables age and duration in the code context.",
"Table 4 (Evaluation on plot type accuracy; columns: Test (hard) / Dev (hard) / Test (gold) / Dev (gold)). With code canonicalization: Full Model 70.58% / 71.46% / 83.33% / 78.95%; Hierarchy 64.65% / 68.92% / 87.50% / 82.46%; Link 65.32% / 64.09% / 81.25% / 73.68%; LSTM 66.67% / 67.47% / 85.42% / 85.96%; +codeBERT 65.44% / 67.96% / 75.00% / 57.89%; +RoBERTa 65.21% / 66.38% / 66.67% / 54.39%. Without code canonicalization: Full Model 63.53% / 65.66% / 72.92% / 80.70%; Hierarchy 61.41% / 67.47% / 66.67% / 73.68%; Link 61.30% / 63.72% / 64.58% / 77.19%; LSTM 64.65% / 65.78% / 81.25% / 70.18%; +CodeBERT 56.04% / 57.07% / 60.42% / 56.14%; +RoBERTa 61.30% / 61.91% / 68.75% / 49.12%.",
"Meanwhile, we evaluated PLOTCODER by varying the number of local code cells K.",
"The results show that the program accuracies converge or start to decrease when K > 3 for different models, as observed in (Agashe et al., 2019).",
"However, the accuracy drop of our hierarchical model is much less noticeable than the baselines, suggesting that our model is more resilient to the addition of irrelevant code context.",
"See Appendix B for more discussion.",
"We present the breakdown results per plot type in Tables 6 and 7.",
"To better understand the plotted data prediction performance, in addition to the default plotted data accuracy that requires the data order to be the same as the ground truth, we also evaluate a relaxed version without ordering constraints, described as permutation invariant in Table 7.",
"We compute the results on Test (hard), which has more samples per plot type than the gold splits.",
"Compared to the non-hierarchical models, the most significant improvement comes from the predictions of scatter plots (S) and plots in Others category.",
"We posit that these two categories constitute the majority of the dataset, and the hierarchical model learns to better categorize plot types from a large number of training samples.",
"The accuracy of the hierarchical model on some categories is lower than the baseline's, but the difference is not statistically significant since those categories only contain a few examples.",
"A more detailed discussion is included in Appendix C.",
"Table 6 (Plot type accuracy on Test (hard) per type; columns: S / H / Pie / S+P / KDE / Others). With code canonicalization: Full Model 77.17% / 70.86% / 61.54% / 12.28% / 29.69% / 84.14%; Hierarchy 70.65% / 68.00% / 76.92% / 15.79% / 39.06% / 71.20%; Link 73.55% / 68.00% / 69.23% / 21.05% / 35.94% / 70.55%; LSTM 73.91% / 71.43% / 69.23% / 21.05% / 28.13% / 73.79%; +codeBERT 67.39% / 66.29% / 76.92% / 21.05% / 35.94% / 77.02%; +RoBERTa 61.59% / 62.29% / 61.54% / 10.53% / 34.38% / 80.58%. Without code canonicalization: Full Model 71.01% / 74.29% / 76.92% / 12.28% / 37.50% / 65.05%; Hierarchy 75.00% / 72.00% / 61.54% / 14.04% / 31.25% / 58.25%; Link 72.10% / 60.57% / 69.23% / 22.81% / 37.50% / 63.75%; LSTM 74.64% / 74.29% / 69.23% / 19.30% / 29.69% / 65.70%; +codeBERT 71.01% / 56.00% / 46.15% / 14.04% / 35.94% / 55.02%; +RoBERTa 73.91% / 47.13% / 46.15% / 10.53% / 29.69% / 74.43%.",
"To better understand the challenges of our task, we conduct a qualitative error analysis and categorize the main causes of erroneous predictions.",
"We investigate all error cases on the Test (gold) split for the full hierarchical model, and present the results in Table 8. We summarize the key observations below, and defer more discussion to Appendix E.",
"Around half of the error cases (categories 1-3 in Table 8) are due to the ambiguity of the natural language description.",
"About 10% of samples (category 4) require longer code context for prediction, because the program selects the plotted data from distant code context that exceeds the input length limit.",
"Sometimes the model generates semantically equivalent but syntactically different programs from the ground truth (category 5), which can happen when two variables or dataframes contain the same data.",
"Besides understanding complex natural language descriptions, as shown in Figure 3, another challenge (categories 6-7) is to understand the code context and reason about the data stored in different variables.",
"For example, in Figure 5, although both dataframes income_data and married_af_peoples include the age column, the model must infer that married_af_peoples is a subset of income_data, and thus it should select income_data to plot the statistics of people from all groups.",
"In this paper, we conduct the first study of visualization code synthesis from natural language and programmatic context.",
"We describe PLOTCODER , a model architecture that includes an encoder that links the natural language description and code context, and a hierarchical program decoder that synthesizes plotted data from the code context and dataframe items.",
"Results on real-world Jupyter notebooks show that PLOTCODER can synthesize visualization code for different plot types, and outperforms various baseline models.",
"This material is in part based upon work supported by the National Science Foundation under Grants No. TWC-1409915, IIS-1546083, IIS-1955488, IIS-2027575, and CCF-1723352, DOE award DE-SC0016260, DARPA D3M under Grant No. FA8750-17-2-0091, Berkeley DeepDrive, the Intel-NSF CAPA center, and gifts from Adobe, Facebook, Google, and VMware.",
"Xinyun Chen is supported by the Facebook Fellowship."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"other",
"objective",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Knowledge-grounded dialogue systems are intended to convey information that is based on evidence provided in a given source text.",
"We discuss the challenges of training a generative neural dialogue model for such systems that is controlled to stay faithful to the evidence.",
"Existing datasets contain a mix of conversational responses that are faithful to selected evidence as well as more subjective or chit-chat style responses.",
"We propose different evaluation measures to disentangle these different styles of responses by quantifying the informativeness and objectivity.",
"At training time, additional inputs based on these evaluation measures are given to the dialogue model.",
"At generation time, these additional inputs act as stylistic controls that encourage the model to generate responses that are faithful to the provided evidence.",
"We also investigate the usage of additional controls at decoding time using resampling techniques.",
"In addition to automatic metrics, we perform a human evaluation study where raters judge the output of these controlled generation models to be generally more objective and faithful to the evidence compared to baseline dialogue systems.",
"Dialogue systems that strive to be informative teachers are difficult to build, despite recent progress in training end-to-end systems that mimic human language at a linguistic level.",
"These systems benefit from vast training data and great representational capacity; yet there are no controls (or training objectives) available that ensure they are truthful.",
"A more limited goal for a system is to be faithful to one or more source documents that we implicitly trust.",
"Such a system might help educate users about a particular topic through conversational interaction, or it might augment a task-oriented dialogue system by providing additional information about the process involved in, say, adding a new home automation device.",
"[Figure 1 excerpt: 'I visit animal shelters fairly often' / 'A \"no-kill\" shelter is an animal shelter that does not kill healthy or treatable animals even when the shelter is full, reserving euthanasia for terminally ill animals ...']",
"We assume that multi-turn conversational interaction can help a human user learn to retain the new material.",
"Here, we investigate ways to stay faithful to information from a text document in a conversation.",
"We approach this problem via the task of knowledge-grounded dialogue , where a system produces a dialogue response using a piece of evidence from a grounding document and a previous conversation history as input (as in Figure 1).",
"Whereas PERSONACHAT -style tasks (Zhang et al., 2018) may focus on dialogue systems that are meant to be engaging, this task focuses instead on systems that are meant to be informative, meaning that they only share verifiable information and exclude subjective or invented personal information.",
"There are existing knowledge-grounded dialogue datasets (e.g. (Ghazvininejad et al., 2018; Dinan et al., 2019; Qin et al., 2019)) that could be appropriate training resources for such an informative dialogue agent.",
"However, we observe that these datasets often contain utterances with varying conversation styles and intents, including some utterances that are more informative and some that are chit-chat utterances or subjective commentary.",
"For instance, in Figure 1, we show an example conversation excerpt from the Wizard of Wikipedia (Dinan et al., 2019) training set.",
"While some utterances are supported by the grounding documents (the second response), others include personal experiences and observations (as in the first response).",
"Because of this mix of conversations styles, we cannot ensure that models naively trained on this data will learn to generate only faithful, informative utterances.",
"In order to avoid this issue, one could collect new datasets where the responses are more explicitly constrained by the evidence, but this could be quite expensive and may be challenging to implement.",
"Instead, in this paper, we propose an alternate approach: we adapt techniques from controllable text generation in order to train dialogue models that learn to disentangle these conversation styles within the data and can be controlled at generation time to produce more grounded responses.",
"We propose using multiple evaluation measures that are relevant to the faithfulness of a response and use these to control the output of two commonly used seq2seq models (GPT-2 (Radford et al., 2019) and T5 (Raffel et al., 2020)).",
"We investigate two methods for adding controllability.",
"First, we integrate control code features based on the evaluation measures as special tokens prepended to the seq2seq input, drawing inspiration from domain-based control codes methods (Keskar et al., 2019).",
"These special tokens are created using information about the gold response at training time, but are set to maximize the groundedness of the responses at generation time.",
"Second, we implement a form of resampling that directly restricts the output to satisfy the proposed evaluation measures.",
"In order to inspect the faithfulness and style of the responses, we use automatic evaluations (in-cluding BLEU and the evaluation measures described) and human evaluations that are designed to focus on the degree to which the response is faithfully representing information from the evidence.",
"Our results show that using these controllable generation techniques can improve the perceived faithfulness and objectivity.",
"We also show that the proposed evaluation measures correlate with the human judgements, indicating that these are appropriate measures for gauging specific aspects of groundedness.",
"Lastly, we conclude the paper with some discussion of examples and possible trade-offs.",
"We introduce a sub-task of knowledge-grounded dialogue where a dialogue agent is intended to be informative and must not share hallucinations , which we define here as any information that is neither inferrable from nor directly stated by external documents.",
"In this task, a system is given evidence from a document (or documents) and a conversation history and must produce a response that is both faithful to the evidence and also natural within the context of the previous conversation utterances.",
"Because this task focuses on being informative to a user, the agent is not allowed to share unsupported or subjective information (this includes invented personal traits e.g. I love dogs, too!).",
"Additionally, it is not sufficient to be purely extractive as information from the evidence may need to be re-phrased to be a conversationally appropriate response (e.g. if a user asked a question that is inferrable from the evidence but not directly stated).",
"To simplify the task for this paper, we assume that an appropriate evidence span, e , has already been labelled.",
"We therefore study how to generate an appropriate response y given the previous conversation history x and a chosen evidence e as input.",
"Our goal is to design a dialogue model that is more faithful and objective in how it relays evidence.",
"We propose using a series of evaluation measures to estimate whether a response is (1) written in an objective voice, (2) not sharing extra information that is not in the document and (3) entailed by the grounding evidence.",
"In the modeling section (Sec. 4), we describe how we incorporate these measures into a controllable generation framework.",
"Objective Voice. One form of hallucination is when a dialogue agent shares personal stories or opinions.",
"It is common for dialogue agents to learn this behavior as many dialogue datasets contain instances of personal chit-chat even if the task is aimed at grounded language.",
"We estimate objective voice as a binary variable based on the presence of first person singular pronouns detected using a word list.",
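A minimal sketch of this check; the exact pronoun list and tokenization are our assumptions, not the authors' implementation.

```python
# Illustrative first-person singular word list (an assumption).
FIRST_PERSON = {"i", "me", "my", "mine", "myself", "i'm", "i've", "i'll", "i'd"}

def is_objective_voice(response: str) -> bool:
    # naive tokenization: lowercase and strip basic punctuation
    tokens = response.lower().replace(",", " ").replace(".", " ").split()
    return not any(tok in FIRST_PERSON for tok in tokens)

print(is_objective_voice("I love dogs, too!"))                      # False
print(is_objective_voice("A no-kill shelter does not euthanize."))  # True
```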
"Lexical Precision. We also want to ensure that the response does not add extra information beyond what is in the selected evidence.",
"To estimate this, we measure the precision of the unigrams in the response with respect to the evidence.",
"A high value indicates that most of the words in the response are contained somewhere in the evidence.",
"We use this measure because it is related to grounding precision scores in previous work (Tian et al., 2020) and because it can reasonably gauge how extractive the response is. One drawback of this measure is that it is based on lexical features, which may not reflect semantic differences in the information being shared (e.g., dropping the word 'not' may yield high lexical precision but a very different semantic meaning from the original evidence).",
"We leave investigation of more semantic-oriented measures of the precision of information to future work.",
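For concreteness, here is a small sketch of the unigram precision; whether counts are clipped and how tokenization is done are our assumptions.

```python
from collections import Counter

def lexical_precision(response: str, evidence: str) -> float:
    # fraction of response unigrams that also appear in the evidence
    # (clipped counts, as in standard precision; clipping is our assumption)
    resp = Counter(response.lower().split())
    evid = Counter(evidence.lower().split())
    overlap = sum(min(c, evid[w]) for w, c in resp.items())
    return overlap / max(1, sum(resp.values()))

print(lexical_precision("the shelter is full", "a shelter that is never full"))
# 0.75 -> 3 of 4 unigrams ("shelter", "is", "full") appear in the evidence
```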
"Entailment. Lastly, we want to encourage the model to produce a response that is semantically entailed by the source document.",
"We use a state-of-the-art natural language inference (NLI) model (RoBERTa trained on MNLI (Liu et al., 2019)) to estimate if a response is entailed by the evidence.",
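A hedged sketch of this measure using a public MNLI checkpoint (`roberta-large-mnli` on the HuggingFace hub); the paper's exact checkpoint and any score thresholding may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public stand-in for the paper's NLI model (an assumption).
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def is_entailed(evidence: str, response: str) -> bool:
    inputs = tok(evidence, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    label = nli.config.id2label[logits.argmax(-1).item()]
    # neutral and contradiction are both treated as non-entailing
    return label == "ENTAILMENT"
```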
"3 Data",
"Wizard of Wikipedia (Dinan et al., 2019) is a recent, large-scale dataset of multi-turn knowledge-grounded dialogues between an apprentice and a wizard, who has access to information from Wikipedia documents.",
"The wizard labelled evidence spans within documents for each utterance they made.",
"Additionally, the development and test sets are split into two portions depending on if the conversation is about a topic that was seen or unseen in the training data.",
"We use the gold-labelled evidence as input to the model in order to focus on improving the quality of generating responses given such evidence and the previous dialogue history.",
"We also focus on only modeling the utterances by the wizard in the cases where they are responding to the apprentice.",
"We include data statistics in Table 1 and an example conversation excerpt in Figure 1.",
"Footnote 1: We aggregate neutral and contradiction as non-entailing because we care mainly about detecting entailment rather than the distinctions between the other two standard NLI categories.",
"We note that even though Wizard of Wikipedia is a knowledge-grounded dataset, there are many utterances that also include information external to the evidence (as noted in Figure 1).",
"Many conversation turns relay evidence while also embellishing with chit-chat, opinion sharing, or interlocutors' own intuitions and world knowledge.",
"This is because this dataset was collected by asking human crowdworkers to converse with each other, and it is natural for humans to embellish and personalize their conversations even when discussing a document.",
"Yet, for our goal of training informative dialogue agents, we need to train models that only relay information that is found in the evidence.",
"In order to avoid collecting new data, which is costly and challenging, we investigate how to train models with this data while discouraging them from hallucinating extra information that cannot be confirmed in the evidence.",
"One way to deal with this challenge might be to only train with the portions of the data where the response is highly grounded by the evidence.",
"However, in our calculations (bottom of Table 1), we find that as much as 44% of training set responses are in first person and only 23% of responses are predicted to be entailed by the evidence, which indicates that a large portion of training data would have to be excluded.",
"Instead, our paper proposes a modeling technique in which we incorporate different input features de-noting different conversational styles.",
"We can then train the model in a way that learns to use these features to disentangle the differences between utterances that are more faithful to the evidence vs. other types of utterances.",
"We investigate how to add controllable features to a large neural dialogue model in order to constrain the amount of hallucinated text while also taking advantage of the underlying fluency of a large end-to-end neural model.",
"As our underlying dialogue model, we use neural seq2seq architectures T5 (Raffel et al., 2020) and GPT-2 (Radford et al., 2019), which are architectures used in state-of-the-art dialogue systems (e.g. DialoGPT (Zhang et al., 2020)).",
"We fine-tune these models on our grounded dialogue dataset.",
"The input to the model is a sequence of evidence tokens $e_1 \ldots e_p$ and a dialogue history, which we treat as a sequence of tokens $x_1 \ldots x_m$ where the utterances are delimited by the speaker ID (either <speaker1> or <speaker2>).",
"For the GPT-2 model, we also include special token-type embeddings that are added to the byte-pair embedding tokens and position embeddings.",
"The token-type embeddings denote the segments of the input that belong to the evidence and the two different speakers.",
"We train the model to produce the next conversation utterance $y_1 \ldots y_n$ by minimizing the cross-entropy: $\mathcal{L}_{CE} = -\frac{1}{n} \sum_{i=1}^{n} \log p(y_i \mid y_{<i}, x, e)$ (1).",
"Caveats of generative language models.",
"As noted by the documentation accompanying the GPT-2 release, we lack a complete understanding of language models' robustness and worst case behaviors.",
"Even though training data for GPT-2 and T5 have been carefully selected, these large datasets may contain sources with unfair distributions and factual inaccuracies, and thus the models and the resulting generated synthetic data may have inherited these biases.",
"Additionally, the output generated by these models may only succeed in being superficially similar to human-written text or dialogue turns.",
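To make the fine-tuning objective in Eq. (1) concrete, here is a minimal sketch for the T5 variant using the HuggingFace API; the example strings are illustrative, and in practice the <speaker1>/<speaker2> markers would be registered as special tokens rather than spliced in as raw text.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

evidence = "A no-kill shelter does not kill healthy or treatable animals."
history = "<speaker1> What is a no-kill shelter? <speaker2>"
response = "It is a shelter that reserves euthanasia for terminally ill animals."

# Evidence + dialogue history form the input; the gold response is the target.
enc = tok(evidence + " " + history, return_tensors="pt", truncation=True)
labels = tok(response, return_tensors="pt").input_ids
loss = model(**enc, labels=labels).loss  # token-level cross-entropy, Eq. (1)
loss.backward()
```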
"We describe two methods of adding controllability to the dialogue models to enhance the groundedness according to the evaluation measures from Sec. 2.1.",
"First, we incorporate control features into the input of the model.",
"Second, we describe additional decoding-time techniques using resampling.",
"We add control features as a way of encouraging the underlying language model to disentangle different conversations styles at training time.",
"We implement this using the control code approach previously introduced in CTRL (Keskar et al., 2019).",
"First, we use the measures introduced in Section 2.1 to create control feature tokens based on how much of the content of the response is grounded in the gold labelled evidence.",
"The control feature tokens c 1 ...c n are prepended to the other tokens.",
"The training objective therefore becomes: LCE = 1 n n (cid:88) i =1 log p ( y i | y <i , x, e, c ) (2) At training time, we set control feature tokens based on measures of entailment, lexical precision, and objective voice of the gold response.",
"At decoding time, control codes are set to the desired valued for these qualities (high entailment, high lexical precision, objective voice).",
"Objective Voice In order to encourage the model to be only relaying objective information from the evidence, we include a control code for whether or not the utterance contains first-person pronouns ( <first-person> , <no-first-person> ).",
"At decoding time, we always use the <no-first-person> control token.",
"Lexical Precision We measure the lexical precision of the response with respect to the evidence, splitting the training utterances into three terciles (high, medium, and low).",
"We map the terciles to control codes to denote the precision level ( <high-prec> , <med-prec> , and <low-prec> ).",
"At decoding time, we always use <high-prec> .",
"Entailment We add control codes for the output of the NLI classifier ( <entailed> , <non-entailed> ).",
"At decoding time, we always use <entailed> .",
"Whereas the control code method implicitly teaches the model to use different styles, some applications may require more direct control over the model output.",
"Additionally, there may be situations where a dialogue system cannot be re-trained.",
"We therefore also investigate a method of implementing more direct control at decoding time.",
"We experiment with a resampling method that continues to sample responses until one is found that satisfies the evaluation measures (high lexical precision, objective voice, and predicted entailment).",
"To save on computational efficiency, we use a cut-off to avoid resampling more than d times.",
"We perform experiments using automatic metrics and human judgments to evaluate the effectiveness of the proposed controllable dialogue system and its various components.",
"We use the HuggingFace library (Wolf et al., 2020) versions of GPT-2 and T5.",
"We select training hy-perparameters based on cross-entropy of the development set.",
"We use a learning rate of 8 E 5 and maximum gradient norm of 1 , 3 .",
"5 for GPT-2, T5 respectively with ADAM to minimize the training loss (with 200 warm-up steps).",
"If the total sequence length is greater than 1024, we truncate the previous conversation turns until the sequence is short enough.",
"We train for three epochs for all models.",
"For decoding, we use nucleus sampling (Holtzman et al., 2020) with p = 0 .",
"6 and a minimum generation length of five tokens (based on better BLEU performance with the development set).",
"In our experiments with resampling, we arbitrarily set d = 10 .",
"We use both automatic metrics (Sec. 5.3 and 5.4) and human ratings (Sec. 5.5) to better understand performance of our model and the effect of controllable features.",
"First, we use BLEU to compare the model output to a gold reference.",
"While BLEU gives a general sense of the fluency, there are drawbacks to word-overlap metrics for evaluating open-ended generations like dialogue (Liu et al., 2016).",
"Additionally, comparing to a gold reference answer fails to measure the underlying question we hope to interpret: whether the response is more objective and grounded to the evidence.",
"Therefore, we also evaluate the output using the proposed evaluation measures from Section 2.1.",
"In addition to lexical precision, we also report the lexical recall of words from the evidence.",
"But, the controllable models are controlled using the same evaluation measures, so we expect that these models may have an advantage in these metrics.",
"Thus, we rely more on human evaluations (Section 5.5).",
"We ask humans to evaluate the quality along multiple aspects including whether the response is fluent, relevant, supported/faithful, and objective.",
"First we conduct an ablation study to investigate the effects of each individual control code feature being used as model input.",
"Table 2 shows the results on the seen topics portion of the Wizard of Wikipedia development set.",
"Unsurprisingly, each BLEU Objectiv.",
"control feature generally helps in improving on the measure that was used in its training.",
"However, we also find, more generally, that each type of control code feature does improve over the base model on all metrics.",
"Results also show that using all control code features together generally improves the performance across the automatic metrics.",
"We show results on both portions of the Wizard of Wikipedia test set in Table 3.",
"As baselines, we use finetuned GPT-2 and T5 without any controllable features or resampling.",
"We also include results the end-to-end generative model (E2E) with gold knowledge that was introduced in the original Wizard of Wikipedia paper (Dinan et al., 2019) and the model in the follow-up work on dodecaDialogue (Shuster et al., 2020).",
"These are transformer-based architectures that use the evidence and conversation history as inputs but do not explicitly control the model to be more faithful to the input.",
"In general, we find that models with pre-trained or multi-task training set-ups (dodecaDialogue, GPT-2, and T5) have relatively consistent performance across both the seen and unseen topic partitions of the test set, indicating that these models can generalize fairly well to unseen topics.",
"Results generally show improvements over the baselines when using control codes.",
"By additionally using resampling at decoding time, we see further improvements, though resampling is not as effective on its own.",
"One explanation why resampling is not as effective is that it may be unable to find a satisfactory response within d resampling Model Fluency Relevance Faithfulness Objectivity E2E model (Dinan et al., 2019) 5 .",
"turns, particularly if the underlying model has not been already trained in a controllable set-up.",
"Supporting this, we find that different choices of d has more of an impact on performance with the just resampling model than with the control code + resampling model.",
"The controllable T5 models generally outperform all of the other models in terms of the metrics from Section 2.1.",
"This may not be so surprising since these models are using the same metrics for control inputs at training time.",
"The dodecaDialogue model outperforms our best model variant in the BLEU and recall metrics, but this may also be related to the longer average token length of output of that model (19 tokens on average) in comparison to our model (16 tokens on average).",
"In order to get a more conclusive understanding of the performance differences, we perform a human evaluation study, described below.",
"We use human evaluations to gauge performance across multiple aspects of quality.",
"One aspect which we focus on is how much the information in the responses is grounded in the evidence, which we consider to be a strong requirement for this task.",
"But, there are also other complementary aspects of response quality that are important (e.g. being appropriate to the conversational context).",
"Therefore, we ask raters to judge a random subsample of model responses from the test set in terms of four qualities: fluency (how understandable and proficient the language is), relevance (whether it is an appropriate reply to the conversation history), faithfulness (whether the reply is fully supported by the evidence), and objectivity (whether the reply is fully objective, rather than sharing personal feelings or experiences).",
"2 2 The exact phrasing of the questions given to human raters is in the appendix.",
"We subsample examples from the seen topics test set, using 100 examples per model variant with 3 human raters per example.",
"In order to give raters more flexibility, they are asked to rate each quality on a Likert scale from 1 (low quality) to 5 (high quality).",
"We measure the agreement for each of the four qualities separately using Krippendorff's alpha and find that the agreement (0.8, 0.91, 0.88, 0.96 respectively) is reliably high.",
"In Table 4, we include the averaged results from the human study.",
"We provide asterisks in every case where a metric is significantly different from the best result (bolded), as found with Welch's t-test.",
"By adding the control code features and resampling, we do not see a drop in the fluency, which is similarly high across all of the models.",
"In fact, we see that most of the trade-off is between the relevance of the response vs. the faithfulness and objectivity.",
"Our results show the faithfulness and objectivity of the T5 models with control codes is significantly higher than in the uncontrolled models (top three rows).",
"This is a promising indication that adding these controllable features significantly steers the generations towards making more grounded, objective responses, with only a slight decrease in relevance.",
"Including resampling is not as effective in promoting faithfulness and objectivity as the control codes, though more faithful and objective than the base T5 model.",
"By using both control codes and resampling (bottom row), the T5 model is able to achieve nearly the same level of faithfulness and objectivity as with just using control codes, but with higher relevance subscores.",
"For the full set of annotated examples, we also find that the human scores for faithfulness and objectivity correlate with measurements from the evaluation measures that we described in Section 2.1.",
"For instance, the absence of first person strongly correlates with higher objectivity according to human raters (Pearson r value of 0.8 at p value < 0 . 001 ).",
"Lexical precision and entailment measures both strongly correlate with human perceptions of faithfulness and objectivity, as well.",
"3 This confirms that the evaluation measures that we propose using as controls can be appropriate estimates for how humans might perceive the groundedness of a response.",
"However, these metrics do not correlate to relevance or fluency.",
"Based on these observations, it seems that these measures can be useful to gauge the general groundedness of the response but should still be viewed in tandem with other quality scores to get a more holistic understanding of performance.",
"In Table 5, we highlight some examples of model output (we also provide additional examples in the appendix).",
"The responses in the controllable models tend to be more concise in relaying information from the evidence.",
"In the first example, the controllable model only shares information that is entailed by the evidence, excluding extra information about spices that is not easily verifiable within the document.",
"This may also come with a slight trade-off with the relevance of the replies, as in the second example where the response while more faithful to the evidence is not quite as pertinent to the previous conversation turn.",
"Similarly, in the third example, the full model is faithfully citing the evidence but is too extractive to the extent of including irrelevant details.",
"In the last example in Table 5, both the models make the same error where they incorrectly give an affirmative answer to the user's question about George Foreman even though they both identify Michael Boehm as the correct inventor (a better answer would be No, it was Michael Boehm.).",
"This example is challenging because the answer to the user's question is not directly stated in the evidence and requires extra inference rather than just extracting relevant words.",
"To address these challenges, one area for future work may be investigating approaches that combine extractive and abstractive generation methods to be more deliberately selective about which portions of evidence are being used and how they are integrated with information about the conversational discourse.",
"Knowledge-Grounded Dialogue There has been significant prior work in tasks for designing dialogue agents that are grounded by document knowledge (Dinan et al., 2019; Qin et al., 2019; Ghazvininejad et al., 2018; Tian et al., 2020; Gopalakrishnan et al., 2019; Moghe et al., 2018).",
"Some of these works investigate retrieving appropriate evidence (Lian et al., 2019; Meng et al., 2020; Kim et al., 2020), while we assume that a piece of evidence has already been retrieved and focus instead on how to craft generations that are more faithful to it.",
"Our work is also novel in investigating controllable generation as one way of disentangling evidence-based utterances from more subjective utterances that may be present in the training data.",
"Controlling hallucinations in text generation There is a body of work that has previously studied methods for integrating evidence in natural language generation tasks, with a focus on reducing hallucinations.",
"Many of these works focus on other generation tasks such as summarization (Maynez et al., 2020; Zhao et al., 2020; Cao et al., 2018; Falke et al., 2019) or data-to-text generation (Puduppully et al., 2019).",
"We investigate how the problem of reducing hallucinations can be applied to the task of knowledge grounded dialogue.",
"Similar to our approach, Filippova (2020) also uses control codes to reduce hallucinations but focused instead on data-to-text generation tasks.",
"Controllable Text Generation In order to control the faithfulness of responses, we draw on techniques from controllable text generation tasks.",
"Most relevant is the development of control-code-style input tokens such as in CTRL (Keskar et al., 2019) or the LFT model of Niu and Bansal (2018).",
"Others have used decoding-time re-ranking (Falke et al., 2019) to constrain the outputs in a way that is similar to our resampling method.",
"Controllable generation has also been used previously with open-ended dialogue data (See et al., 2019) to improve qualities such as the engagingness; however, our work focuses on knowledge-grounded dialogues aiming to increase the faithfulness of the replies.",
"Recently, Wu et al. (2020) used control phrases as controllable inputs to decrease hallucination as a form of content planning.",
"We similarly use controllable features to reduce hallucinations in knowledge grounded dialogues, but our model uses stylis-Document Evidence curry (, plural curries) is an umbrella term referring to a number of dishes originating in the cuisine of the indian subcontinent.",
"In this paper, we investigate how to design knowledge grounded dialogue systems that are less prone to including hallucinations or subjective information.",
"We discuss three evaluation measures related to the groundedness of the response and discuss two methods for integrating these metrics into a controllable dialogue system.",
"We demonstrate that this controllable dialogue system is able to produce responses that are perceived by humans to be more objective and faithful to document-based evidence.",
"We would like to thank Slav Petrov and Ankur Parikh as well as the anonymous reviewers for their insightful comments and feedback.",
"We also thank Nouha Dziri for sharing code and data resources.",
"We additionally thank Ashwin Kakarla and his team for helping with human annotations.",
"In this paper, we study the problem of encouraging knowledge grounded dialogue agents to be more faithful in generating information from trusted documents.",
"The controllable models and evaluation measures proposed in this paper could benefit general dialogue applications by constraining their output to only discuss information that is verifiable, which could ensure that these systems are more trustworthy.",
"This could be valuable in a wide range of applications such as educational or information-seeking dialogue settings where the user needs to be given accurate information.",
"As with other conditional generation models, this could also pose a risk if these models were misused by conditioning on evidence from unreliable resources.",
"In our work, we mitigate this risk by carefully considering the source of our evidence and how it was curated.",
"Before applying these models, others should similarly take into consideration whether their evidence sources are reliable and unbiased."
] | [
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"other",
"other",
"objective",
"method",
"method",
"other",
"method",
"abstain",
"other",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"In this paper, we propose an entity-based neural local coherence model which is linguistically more sound than previously proposed neural coherence models.",
"Recent neural coherence models encode the input document using large-scale pretrained language models.",
"Hence their basis for computing local coherence are words and even sub-words.",
"An analysis of their output shows that these models frequently compute coherence on the basis of connections between (sub-)words which, from a linguistic perspective, should not play a role.",
"Still, these models achieve state-of-the-art performance in several end applications.",
"In contrast to these models, we compute coherence on the basis of entities by constraining the input to noun phrases and proper names.",
"This provides us with an explicit representation of the most important items in sentences leading to the notion of focus.",
"This brings our model linguistically in line with pre-neural models of computing coherence.",
"It also gives us better insight into the behaviour of the model thus leading to better explainability.",
"Our approach is also in accord with a recent study (O'Connor and Andreas, 2021), which shows that most usable information is captured by nouns and verbs in transformer-based language models.",
"We evaluate our model on three downstream tasks showing that it is not only linguistically more sound than previous models but also that it outperforms them in end applications 1 .",
"Coherence describes the semantic relation between elements of a text.",
"It recognizes how well a text is organized to convey the information to the reader effectively.",
"Modeling coherence can be beneficial to any system which needs to process a text.",
"Example Sentence 1 Mr. Specter, seeming exasperated, said in an interview Thursday.",
"Focus candidates captured by XLNet _said, _in, day, _interview, _,, er, _an, th, s, _exasperated, ..., spect Example Sentence 2 At the same time, unadvertised products may have almost identical ingredients but less name-recognition.",
"Focus candidates captured by XLNet _name, ition, _products, -, _un, _may, _less, _ingredients, _have, ..., _same Table 1: The pretrained language model, XLNet Yang et al. (2019), captures undesirable (sub-)words as focus (Jeon and Strube, 2020).",
"Recent neural coherence models (Mesgar and Strube, 2018; Moon et al., 2019) encode the input document using large-scale pretrained language models (Peters et al., 2018).",
"These neural models compute local coherence, semantic relations between items in adjacent sentences, on the basis of words and even sub-words.",
"However, it has been unclear on which basis these models compute local coherence.",
"Jeon and Strube (2020) present a neural coherence model, which allows to interpret focus information for the first time.",
"Their investigation reveals that neural models, adopting large-scale pretrained language models, compute coherence on the basis of connections between any (sub-)words or function words (Table 1, 11).",
"In these cases, the model might capture the focus based on spurious information.",
"While such a model might reach or set the state of the art in some end applications, it will do so for 7787 the wrong reasons from a linguistic perspective.",
"This problem did not appear with pre-neural models, since they compute coherence on the basis of entities.",
"Early work about pronoun and anaphora resolution by Sidner (1981, 1983) assumes that there is one single salient entity in a sentence, its focus, which serves as a preferred antecedent for anaphoric expressions.",
"Centering theory (Joshi and Weinstein, 1981; Grosz et al., 1995) builds on these insights and introduces an algorithm for tracking changes in focus.",
"Centering theory serves as basis for many researchers to develop systems computing local coherence by approximating entities (Barzilay and Lapata 2008; Feng and Hirst 2012; Guinaudeau and Strube 2013, inter alia).",
"In this paper, we propose a neural coherence model which is linguistically more sound than previously proposed neural coherence models.",
"We compute coherence on the basis of entities by constraining our model to capture focus on noun phrases and proper names.",
"This provides us with an explicit representation of the most important items in sentences, leading to the notion of focus.",
"This brings our model linguistically in line with pre-neural models of coherence.",
"Our approach is not only linguistically more sound but also is in accord with a recent empirical study by O'Connor and Andreas (2021) who investigate what contextual information contributes to accurate predictions in transformer-based language models.",
"Their experiments show that most usable information is captured by nouns and verbs.",
"Their findings suggest that we can design better neural models by focusing on specific context words.",
"Our work follows their findings by modeling entity-based coherence in an end-to-end framework to improve a neural coherence model.",
"Our model integrates a local coherence module with a component which takes context into account.",
"Our model first encodes a document using a pretrained language model and identifies entities using a linguistic parser.",
"The local coherence module captures the most related representations of entities between adjacent sentences, the local focus.",
"Then it tracks the changes of local foci.",
"The second component captures the context of a text by averaging sentence representations.",
"We evaluate our model on three downstream tasks: automated essay scoring (AES), assessing writing quality (AWQ), and assessing discourse coherence (ADC).",
"AES and AWQ determine text quality for a given text, aiming to replicate human scoring results.",
"Since coherence is an essential fac-tor in assessing text quality, many previous coherence models are evaluated on AES and AWQ.",
"ADC evaluates coherence models on informal texts such as emails and online reviews.",
"In our evaluation, our model achieves state-of-the-art performance.",
"We also perform a series of analyses to investigate how our model works.",
"Our analyses show that capturing focus on entities gives us better insight into the behaviour of the model, leading to better explainability.",
"Using this information, we examine statistical differences of texts assigned to different qualities.",
"From the perspective of local coherence, we find that texts of higher quality are neither semantically too consistent nor too variant.",
"Finally, we inspect error cases to examine how our model works differently compared to previous models.",
"Entity-based modeling has been the prevailing approach to model coherence in pre-neural models.",
"The entity grid is its most well-known implementation (Barzilay and Lapata, 2008).",
"It represents entities in a two-dimensional array to track their transitions between sentences.",
"Many variations have been proposed to improve this model, e.g., projecting the grid into a graph representation (Guin-audeau and Strube, 2013) or converting the grid to a neural model (Tien Nguyen and Joty, 2017).",
"However, the neural version of the entity grid (Tien Nguyen and Joty, 2017) has two limitations.",
"First, Lai and Tetreault (2018) state that entity grids applied to downstream tasks are often extremely sparse.",
"In their evaluation, it is difficult to find meaningful entity transitions between sentences in the grids.",
"Accordingly, this model performs worse than other neural models.",
"More importantly, this neural model cannot provide any clues of how this model works since Tien Nguyen and Joty (2017) apply a convolutional layer on the entity grid.",
"The feature map of the convolutional layer is not interpretable.",
"They cannot examine which entity is assigned more importance than others by their model.",
"In contrast, we constrain our model to capture focus on entities using noun phrases.",
"Then our model tracks the changes of focus.",
"Hence, it provides us with an interpretable focus (Section 5).",
"More recently, Moon et al. (2019) propose a neural coherence model to exploit both local and structural aspects.",
"They evaluate their model on an arti-7788 ficial task only, the shuffle test, which determines whether sentences in a document are shuffled or not.",
"However, recent studies (Pishdad et al., 2020) claim that this artificial task is not suitable to evaluate coherence models.",
"Lai and Tetreault (2018) show that the neural coherence models, which achieve the best performance on this task, do not outperform non-neural models on downstream tasks.",
"More recently, Mohiuddin et al. (2021) find a weak correlation between the model performance in artificial tasks and downstream tasks.",
"In our evaluation, we compare Moon et al. (2019) with ours in an artificial task as well as in three downstream tasks.",
"Moon et al. (2019) perform the best in the artificial task, but do not outperform our model in three downstream tasks (Section 4).",
"Figure 1 presents the architecture of our model.",
"We first introduce our entity representation and sentence encoding using a pretrained language model.",
"Next, we describe a novel local coherence model.",
"We then combine the two representations of local coherence and the context vector, simply averaged sentence representations.",
"Finally, we apply a feed-forward network to produce a score label.",
"We use a pretrained language model (Yang et al., 2019) to encode sentences.",
"XLNet learns bidirectional contexts by maximizing expected likelihood using an autoregressive training objective.",
"Hence it allows to capture the focus in sentences.",
"XLNet outperforms other language models in tasks which require processing long texts.",
"Recent work investigates that pretrained language models learn linguistic features that are helpful for language understanding (Tenney et al., 2019; Warstadt et al., 2020).",
"Inspired by this, we encode two adjacent sentences at once to capture discourse features, such as coreference relations.",
"In this strategy, items are encoded twice except the items included in the first and the last sentence.",
"We interpolate items encoded twice to consider context with regard to the preceding and succeeding sentence.",
"We encode an input document using XLNet to obtain word representations.",
"Sentence representations are means of all word representations in a sentence.",
"We then feed sentence representations and the noun phrase representations into the the coherence modules.",
"In formal definitions, let E e = [ h ( e,i, 1) , ..., h ( e,i,m ) , h ( e,i +1 , 1) , ..., h ( e,i +1 ,m ) ] denote the output of encoding, where e indicates the index of encoding, and m indicates the index of a subword ( w ) in the sentence ( s i ).",
"h indicates the encoded representation of w .",
"This encoding output includes the encoded representations of s i and s i +1 since we encode two adjacent sentences at once.",
"Likewise, E e +1 = [ h ( e +1 ,i +1 , 1) , ..., h ( e +1 ,i +2 ,m ) ] is the output in the next encoding, and it includes the encoded representations of s i +1 and s i +2 .",
"Then, the encoded representation of s i +1 is a sequence of ih ( i +1 ,m ) = avg ( h ( e,i +1 ,m ) , h ( e +1 ,i +1 ,m ) ) , which is the interpolated representation of s i +1 in the two encoding stages ( e and e + 1 ).",
"We iterate this process to encode all adjacent sentences.",
"Pretrained language models encode sequences as sub-words, but to our knowledge, there is no linguistic parser using sub-words as input.",
"Hence, we use a linguistic parser to identify noun phrases in each sentence separately.",
"Kitaev and Klein (2018) present a neural constituency parser which determines the syntactic structure of a sentence.",
"To identify noun phrases and proper names, we ap-7789 ply this parser to the original sentences, then map parsed constituents to sub-word tokens.",
"Since pretrained language models do not have the means to represent phrase meaning composition, we average sub-word representations for phrases which consists of multiple sub-words.",
"While this implementation does not capture the complex meaning of phrases, Yu and Ettinger (2020) report that it shows higher correlation with human annotations than using the last word of phrases, assuming that the last word of a phrase is its head.",
"Let NP i = [ np i, 1 , np i, 2 , ..., np i,j ] denote a sequence of noun phases ( np ) in the i th sentence, and j indicates the index of a noun phrase in the sentence.",
"Each representation of a noun phrase is obtained as np i,j = avg ( ih i, 1 , ..., ih i,k ) , where ih i,k indicates the subword tokens contributing to the same entity.",
"We compare the semantic representations of noun phrases between adjacent sentences.",
"The two most similar representations of noun phrases are taken as local focus of the respective sentences.",
"These two representations are averaged to capture the common context.",
"We use cosine similarity to measure semantic similarity.",
"We notice that some sentences do not include noun phrases, approximately 3.5% in the three datasets used in our evaluation.",
"This mostly occurs when some words are omitted as in cases of ellipsis (Hardt and Romero, 2004).",
"In such cases, we maintain the focus of the previous sentence to preserve the context.",
"A depthwise convolutional layer is applied to the local focus to record its transitions.",
"Unlike a typical convolutional layer, the depthwise convolutional layer captures the patterns of semantic changes between different time-steps for the same spatial information (Chollet, 2017).",
"In our model, this layer captures the semantic changes between local foci considering the context but on the same spatial dimension of each focus.",
"Hence, it does not hurt the explainability of our model.",
"We use the lightweight depthwise convolutional layer (Wu et al., 2019).",
"Then we update the representations of local foci to track the semantic changes between them.",
"We use the Tree-Transformer which updates its hidden representations by inducing a tree-structure from a document (Wang et al., 2019).",
"It generates constituent priors by calculating neighboring attention which represents the probability of whether adjacent items are in the same constituent.",
"The constituent priors constrain the self-attention of the transformer to follow the induced structure.",
"Finally, we apply document attention to produce the weighted sum of all the updated local focus representations.",
"The document attention identifies relative weights of updated representations which enables our model to handle any document length.",
"In formal descriptions, let mnp l,i denote the representations of two noun phrases which have the highest cosine similarity scores between the i th and i + 1 th sentence.",
"Then, we define LocalF = [ localf 1 , ..., localf l ] , where localf l is an averaged representation of mnp l,i and mnp l,i +1 .",
"It represents the sequence of local foci between the i th and i + 1 th sentence, and l indicates the index of the local focus in the document.",
"Finally, the local coherence representation is obtained as lcr = doc _ attn ( tree _ trans ( dconv ( LocalF ))) where dconv indicates the depthwise convolutional layer, tree _ trans indicates the Tree-Transformer, and doc _ attn indicates the document attention.",
"We implement our model using the PyTorch library and use the Stanford Stanza library 2 for sentence tokenization.",
"We employ XLNet for the pretrained language model.",
"For the baselines which do not employ a pretrained language model (Dong et al., 2017; Mesgar and Strube, 2018), GloVe is employed for word embeddings, trained on Google News (Pennington et al., 2014) (see Appendix A for more details).",
"To compare baselines within the same framework, we re-implement all of them in PyTorch.",
"We then use our re-implementation to report the performance of models with 10 runs with different random seeds.",
"We verify statistical significance (p-value < 0.01) with both a one-sample t-test, which verifies the reproducibility of the performance of each model, and a two-sample t-test, which verifies that the performance of our model is statistically significantly different from other models.",
"Within the same framework we compare the size of models used in our experiments.",
"Our neural model uses a number of parameters comparable to the state of the art, the transformer-based model 2 https://stanfordnlp.github.io/stanza 7790 (Moon et al. (2019): 118M < Jeon and Strube (2020): 136M < Our model: 137M).",
"In all three downstream tasks, we compare our model against recent neural coherence models.",
"First, Mesgar and Strube (2018) propose a neural local coherence model, based on Centering theory.",
"This model connects the most related states of a Recurrent Neural Network, then represents the coherence patterns using semantic distances between the states.",
"Second, Moon et al. (2019) propose a unified neural coherence model to consider local and structural aspects.",
"This model consists of two modules when they employ a pretrained language model (Peters et al., 2018): a module of inter-sentence relations using a bilinear layer and a topic structure module applying a depth-wise convolutional layer to the sentence representations.",
"To ensure fair comparison, XLNet is employed for this model as well, instead of ELMo (Peters et al., 2018).",
"More recently, Jeon and Strube (2020) propose a neural coherence model approximating the structure of a document by connecting linguistic insights and a pretrained language model.",
"This model consists of two sub-modules.",
"First, a discourse segment parser constructs structural relationships for discourse segments by tracking the changes of focus between discourse segments.",
"Second, a structure-aware transformer updates sentence representation using this structural information.",
"We first evaluate our model on the artificial setup, the shuffle test, used in earlier works (Table 2).",
"We follow the setup used in Lai and Tetreault (2018).",
"In this setup, our model outperforms a simple neural model relying on the pretrained language model.",
"Moon et al. (2019) evaluate their models only in this setup.",
"It achieves outstanding performance in this setup.",
"However, in the following sections, our results show that this model does not outperform our model in downstream tasks.",
"This result is not surprising.",
"There is a line of recent work which shows that this setup is not capable of evaluating coherence models from diverse perspectives.",
"Laban et al. (2021) show that employing fine-tuned language models simply achieves a near-perfect accuracy on this setup.",
"O'Connor and Andreas (2021) measure usable information by selectively ablating lexical and structural information in transformer-based language models.",
"Their findings show that prediction accuracy depends on information about local word co-occurrences, but not word order or global position.",
"We suspect that exploiting all information of a sentence is sufficient for shuffle tests to capture patterns to distinguish whether sentences in a document are shuffled or not.",
"Based on these findings, we evaluate our model on three downstream tasks used for evaluating coherence models, automated essay scoring, assessing writing quality, and assessing discourse coherence.",
"We advise future work not to evaluate coherence models on the artificial setup solely.",
"Dataset.",
"To evaluate the coherence models on AES, we evaluate them on the Test of English as a Foreign Language (TOEFL) dataset (Blanchard et al., 2013).",
"While the Automated Student Assessment Prize (ASAP) dataset 3 is frequently used for AES, TOEFL has a generally higher quality of essays compared to essays in ASAP.",
"The prompts in ASAP are written by students in grade levels 7 to 10 of US middle schools.",
"Many essays in ASAP consist of only a few sentences.",
"In contrast, the prompts in TOEFL are submitted for the standard English test for the entrance to universities by nonnative students.",
"The prompts in TOEFL do not vary so much, the student population is more controlled, and essays have a similar length.",
"Evaluation Setup.",
"We follow the evaluation setup of previous work on AES (Taghipour and Ng, 2016).",
"For TOEFL, we evaluate performance with accuracy for the 3-class classification problem with 5-fold cross-validation.",
"We use the same split for the cross-validation, used by Jeon and Strube (2020).",
"The cross-entropy loss is deployed for training.",
"The ADAM optimizer is used for our model with a learning rate of 0.003.",
"We evaluate performance for 25 epochs on the validation set with a mini-batch size of 32.",
"The model which reaches the 3 https://kaggle.com/c/asap-aes 7791 Model Prompt Avg 1 2 3 4 5 6 7 8 Dong et al. (2017) 69.30 66.47 65.84 66.38 68.89 64.20 67.11 65.73 66.74 Mesgar and Strube (2018) 56.25 55.94 55.20 57.20 56.57 55.10 56.97 58.39 56.45 Averaged-XLNet-1S 70.73 69.48 68.98 67.52 72.35 70.94 70.14 69.01 69.89 Moon et al. (2019)-XLNet 73.75 72.13 72.92 73.29 75.12 74.69 72.89 72.09 73.36 Jeon and Strube (2020)-1S 75.10 73.35 74.75 74.18 76.38 74.30 73.61 73.44 74.39 Jeon and Strube (2020)-2S 76.35 75.40 75.00 74.85 77.63 74.06 73.71 74.00 75.12 Our Model 78.38 75.70 76.58 76.56 79.10 76.41 75.03 74.57 76.54 Table 3: AES: TOEFL Accuracy performance comparison on the test sets, 1S indicates that sentences are encoded individually and 2S indicates that two adjacent sentences are encoded at once on the pretrained language model (see Table 12, 13 in the Appendix C for more details).",
"best accuracy on the validation set is then applied to the test set.",
"Baselines.",
"We compare against Dong et al. (2017), a neural model proposed for AES.",
"They present a model consisting of a convolutional layer, followed by a recurrent layer, and an attention layer (Bah-danau et al., 2015) between the adjacent tokens.",
"Results.",
"Table 3 reports the performance on TOEFL.",
"Dong et al. (2017) report better performance than the more recent neural model based on Centering theory (Mesgar and Strube, 2018).",
"A simple model relying on the pretrained language model outperforms this model, which averages all sentence representations (henceforth, Avg-XLNet).",
"Moon et al. (2019) show that their unified model outperforms previous models on the artificial task, the shuffle test.",
"However, it does not outperform the previous models on the AES task.",
"Jeon and Strube (2020) outperform previous models.",
"Finally, our model, which integrates local and structural aspects, achieves state-of-the-art performance.",
"We perform an ablation study to investigate the contribution of individual components.",
"We compare with Jeon and Strube (2020) who encode two adjacent sentences using the pretrained language model (2SentsEnc).",
"Our results verify that this encoding improves performance, but our model benefits from the novel local coherence module even more.",
"Dataset.",
"Louis and Nenkova (2013) create a dataset of scientific articles from the New York Times (NYT) for assessing writing quality.",
"They assign each article to one of two classes by a semi-supervised approach: typical or good.",
"Though articles included in both classes are of good quality overall, Louis and Nenkova (2013) show that lin-NYT Liu and Lapata (2018)-reimpl 54.35 (1.00) Averaged-XLNet-1SentEnc 67.53 (3.48) Moon et al. (2019)-XLNet-1Sent 74.75 (1.27) Jeon and Strube (2020)-1Sent 75.12 (1.10) Jeon and Strube (2020)-2Sents 76.43 (0.88) Our Model 77.52 (0.42) Table 4: AWQ: Mean (standard deviation) accuracy of assessing writing quality on the test sets in NYT.",
"Evaluation Setup.",
"For NYT, we follow the setup used in previous work.",
"Louis and Nenkova (2013) and Ferracane et al. (2019) undersample the dataset to mitigate the bias of the uneven label distribution.",
"Following Ferracane et al. (2019), Jeon and Strube (2020) partition the dataset into 80% training, 10% validation, and 10% test set, respectively.",
"We use the ADAM optimizer with a learning rate of 0.001 and a mini-batch size of 32.",
"We evaluate performance for 25 epochs.",
"Baselines.",
"Liu and Lapata (2018) propose a neural model which induces structural information without a labeled resource.",
"It induces a non-projective dependency structure by structured attention.",
"Results.",
"Table 4 shows the performance on NYT.",
"Ferracane et al. (2019) reported the best performance of the latent learning model for discourse structure (Liu and Lapata, 2018) on NYT.",
"However, Jeon and Strube (2020) show that the good results are due to embeddings obtained by training on the target dataset.",
"They also report that Avg-XLNet outperforms this model which employs Glove embeddings.",
"Moon et al. (2019) show better performance than this simple model, but it does 7792 Model Yahoo Clinton Enron Yelp Avg Acc Li and Jurafsky (2017) 53.5 61.0 54.4 49.1 51.7 Mesgar and Strube (2018) 47.3 (1.8) 57.7 (0.6) 50.6 (1.2) 54.6 (0.3) 52.6 Lai and Tetreault (2018) 54.9 60.2 53.2 54.4 55.7 Avg-XLNet-1Sent 58.0 (3.9) 57.6 (0.3) 54.3 (0.8) 55.9 (0.4) 56.4 Moon et al. (2019)-XLNet-1SentEnc 56.2 (0.5) 61.0 (0.4) 53.6 (0.5) 56.6 (0.4) 56.9 Jeon and Strube (2020)-1SentEnc 56.4 (0.6) 62.5 (0.9) 54.5 (0.4) 56.9 (0.3) 57.6 Jeon and Strube (2020)-2SentsEnc 57.2 (0.5) 63.0 (0.4) 54.4 (0.4) 56.9 (0.2) 57.9 Our Model 58.4 (0.2) 64.2 (0.4) 55.3 (0.3) 57.3 (0.2) 58.9 Table 5: ADC: Mean (standard deviation) accuracy performance on the test sets in GCDC ( : reported performance in Lai and Tetreault (2018)).",
"not outperform Jeon and Strube (2020).",
"Our model achieves state-of-the-art performance.",
"An ablation study of the joint sentence encoding, Jeon and Strube (2020)-2SentsEnc, verifies that our model gains improvements not only from this encoding but also from our local coherence module.",
"Dataset.",
"While previous work evaluates coherence models on formally written texts (Barzilay and Lapata, 2008), GCDC (Lai and Tetreault, 2018) is designed to evaluate coherence models on informal texts, such as emails or online reviews.",
"The dataset contains four domains: Clinton and Enron for emails, Yahoo for questions and answers in an online forum, and Yelp for online reviews of businesses.",
"The quality of the dataset is controlled to have evenly-distributed scores and a low correlation between discourse length and scores 4 .",
"Evaluation Setup.",
"For GCDC, we perform the experiments following previous work (Lai and Tetreault, 2018).",
"We perform 10-fold cross-validation, use accuracy as evaluation measure on the 3-class classification, and use the cross-entropy loss function.",
"Baselines.",
"Li and Jurafsky (2017) propose a neural model based on cliques, that are sets of adjacent sentences.",
"This model uses the cliques taken from the original article as a positive label and uses cliques with randomly permutated ones as a negative label.",
"Lai and Tetreault (2018) show that a simple neural model which uses paragraph information outperforms previous models on GCDC.",
"Results.",
"Table 5 summarizes the performance on GCDC.",
"While Avg-XLNet outperforms previous baselines, other advanced neural models show sim-4 The Pearson correlation between text length and scores is lower than 0.12 in all domains.",
"ilar performance.",
"Our model performs slightly better than Jeon and Strube (2020) with two sentences encoding.",
"This shows that the gains mainly benefit from this encoding strategy.",
"We suspect that Jeon and Strube (2020) do not benefit from structural information since texts on GCDC are not well-organized.",
"The texts mostly consist of a few sentences, and they express the writers' emotion.",
"Based on this, Lai and Tetreault (2018) state that texts of lower quality have sudden topic changes.",
"We also suspect that human annotators recognize important entities in the texts, such as the name of a person in the US government.",
"Since our model consists of several components, we examine the influence of each component on the performance of the AES task.",
"Specifically, we first examine the influence of our local coherence module.",
"Then we examine the influence of the Tree-Transformer compared to a naive Transformer.",
"Lastly, we examine the influence of the depth-wise convolutional layer deployed ahead of the Tree-Transformer.",
"Table 7 shows that each component contributes to the performance meaningfully while the depthwise convolutional layer increases the performance slightly.",
"This suggests that we could design a better component in future work to capture semantic transitions between local foci.",
"5.1 Capturing Focus Using Entities In Centering theory, the focus is described as the most important item in a sentence.",
"Jeon and Strube (2020) capture the focus using attention scores and analyze texts assigned to different qualities using this focus.",
"They state that the focus is difficult to interpret when it is composed of sub-words.",
"To investigate this further, we compare the focus captured on any (sub-)words and the focus constrained to entities.",
"Table 6 indicates that constraining focus to entities leads to better explainability, in particular on NYT.",
"For example, in the NYT-1516415 news article about String theory, a subword of ein is not an interpretable focus.",
"It may, however, include useful information in the vector space for a neural model.",
"In contrast, our entity-based model leads to better explainability.",
"Instead of ein, it provides the more interpretable focus, Einstein, a theoretical physicist.",
"In TOEFL, broad knowl-edge is a more interpretable focus than a focus consisting of the single subword tokens, broad.",
"Table 6 also shows that our model mainly uses pronouns, and noun phrases are playing an important role to represent focus.",
"This suggests that further investigation is needed to understand how language models work on pronouns to process a text.",
"Using interpretable focus information, we investigate differences in focus transitions of texts assigned to different scores.",
"Motivated by the definition of the continue and the shift transition in Centering theory, we define semantic consistency which represents the degree of semantic changes between local foci.",
"Two adjacent sentences are semantically consistent when the semantic similarity ( sim i ) between the local foci ( lf ) is higher than a semantic threshold ( sem ; score ).",
"This threshold is determined as the average of semantic similarities between local foci of adjacent sentences in texts assigned the same score.",
"Otherwise, a semantic transition ( st ) occurs between the local foci: st i = 1 if sim i < sem ; score .",
"Finally, the semantic consistency (SC) is defined as follows: SC = 1 ( count ( st i ) / | lf | ) .",
"Figure 2 illustrates the semantic consistency on TOEFL, and Table 8 shows the statistics of the semantic consistency on texts assigned to different scores.",
"Texts assigned a high score show lower semantic consistency on average.",
"This indicates that texts of higher quality are overall more semantically variant than texts of lower quality.",
"Additionally, we observe that texts assigned a low score show significantly larger proportions of an extreme level of semantic consistency.",
"We define the extreme level as either texts whose semantic consistency is lower than 5 % , indicating texts are highly variant, or texts whose semantic consistency is higher than 75 % , indicating texts are highly consistent.",
"Hence, these findings indicate that texts of lower quality are semantically too variant or too consistent.",
"Texts of higher quality are neither too variant nor too consistent.",
"We next inspect the focus of texts assigned to different scores (see Table 15,16, and 17 in the Appendix D for more details).",
"This shows that pronouns more frequently indicate the local focus in texts of lower quality than in texts of higher quality.",
"The essays in TOEFL are argumentative essays, and good essays should use facts and evidence to support their claim (Wingate, 2012).",
"We observe that texts assigned a low score frequently include claims without convincing evidence.",
"This 7794 1 2 3 4 5 6 7 8 N-th pair of local focus 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00 C o s i n e s i m il a r i t y b e t w ee n l o c a l f o c i",
"causes our model to capture focus based on pronouns more frequently in these texts.",
"In contrast, texts assigned a high score include convincing evidence to support claims, and this lets our model capture different types of foci in these texts.",
"Finally, we conduct an error analysis to investigate how our model works differently compared to previous coherence models on TOEFL.",
"We first compare the predicted scores with Moon et al. (2019) and a simple model which only considers context, averaged-XLNet.",
"These two baselines show biased predictions in the middle score.",
"We suspect that this is caused by the label bias in TOEFL (Blan-chard et al., 2013).",
"Biased label distributions cause biased predictions, and they benefit from these biased predictions.",
"In contrast, our model benefits more from predicting high scores correctly as well as other scores, indicating that our coherence model assesses text quality better.",
"We then compare with the previous state of the art (Jeon and Strube, 2020).",
"This baseline induces discourse structure to model structural coherence.",
"It captures semantic relations between discourse segments, not just between adjacent sentences.",
"We observe two error cases when this baseline struggles to predict correctly.",
"It predicts scores lower than the ground-truth score for texts which lack support and evidence for claims.",
"However, these texts have a well-organized paragraph for one or two claims.",
"We suspect that this leads human annotators to assign a mid or a high score though the text is not well-organized overall.",
"In contrast, it predicts scores higher than ground-truth scores when unrelated claims are listed or claims are listed S Low S Mid S High Avg SC 55.87 54.45 54.05 (std) (24.53) (21.38) (19.70) Prop of Ext level 17.63 11.54 8.59 Table 8: Semantic consistency statistics (%) for the texts assigned to different scores ( S ).",
"without evidence.",
"Our model, which captures local coherence between adjacent sentences, deals with these cases better (see Table 18 and 19 in the Appendix D for more details).",
"We propose a neural coherence model based on entities by constraining the input to noun phrases.",
"This makes our model better explainable and sets a new state of the art in end applications.",
"It also allows us to reveal that texts of higher quality are neither semantically too consistent nor too variant.",
"Our findings suggest a few interesting directions for future work.",
"Our analysis shows that pretrained language models frequently exploit coreference relations to capture semantic relations.",
"We could design an advanced neural model which exploits these relations explicitly.",
"Lastly, our work could be extended to a multilingual setup.",
"Our model is not tied to a specific pretrained language model but connect a language model with linguistic insights.",
"It can employ a multilingual model (Xue et al., 2021), and our datasets can be translated to other languages.",
"The authors would like to thank the anonymous reviewers for their comments.",
"This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany.",
"The first author has been supported by a Heidelberg Institute for Theoretical Studies Ph.D. scholarship."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"result",
"method",
"objective",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Character-level models of tokens have been shown to be effective at dealing with within-token noise and out-of-vocabulary words.",
"However, they often still rely on correct token boundaries.",
"In this paper, we propose to eliminate the need for tokenizers with an end-to-end character-level semi-Markov conditional random field.",
"It uses neural networks for its character and segment representations.",
"We demonstrate its effectiveness in multilingual settings and when token boundaries are noisy: It matches state-of-the-art part-of-speech taggers for various languages and significantly outperforms them on a noisy English version of a benchmark dataset.",
"Our code and the noisy dataset are publicly available at http: //cistern.cis.lmu.de/semiCRF .",
"Recently, character-based neural networks (NNs) gained popularity for different tasks, ranging from text classification (Zhang et al., 2015) and language modeling (Kim et al., 2016) to machine translation (Luong and Manning, 2016).",
"Character-level models are attractive since they can effectively model morphological variants of words and build representations even for unknown words, suffering less from out-of-vocabulary problems (Pinter et al., 2017).",
"However, most character-level models still rely on tokenization and use characters only for creating more robust token representations (Santos and Zadrozny, 2014; Lample et al., 2016; Ma and Hovy, 2016; Plank et al., 2016).",
"This leads to high performance on well-formatted text or text with misspellings (Yu et al., 2017; Sakaguchi et al., 2017) but ties the performance to the quality of the tokenizer.",
"While humans are very robust to * Work was done at Center for Information and Language Processing, LMU Munich.",
"noise caused by insertion of spaces (e.g., car ni-val) or deletion of spaces (deeplearning), this can cause severe underperformance of machine learning models.",
"Similar challenges arise for languages with difficult tokenization, such as Chinese or Vietnamese.",
"For text with difficult or noisy tokenization, more robust models are needed.",
"In order to address this challenge, we propose a model that does not require any tokenization.",
"It is based on semi-Markov conditional random fields (semi-CRFs) (Sarawagi and Cohen, 2005) which jointly learn to segment (tokenize) and label the input (e.g., characters).",
"To represent the character segments, we compare different NN approaches.",
"In our experiments, we address part-of-speech (POS) tagging.",
"However, our model is generally applicable to other sequence-tagging tasks as well since it does not require any task-specific hand-crafted features.",
"Our model achieves state-of-the-art results on the Universal Dependencies dataset (Nivre et al., 2015).",
"To demonstrate its effectiveness, we evaluate it not only on English but also on languages with inherently difficult tokenization, namely Chinese, Japanese and Vietnamese.",
"We further analyze the robustness of our model against difficult tokenization by randomly corrupting the tokenization of the English dataset.",
"Our model significantly outperforms state-of-the-art token-based models in this analysis.",
"Our contributions are: 1) We present a truly end-to-end character-level sequence tagger that does not rely on any tokenization and achieves state-of-the-art results across languages.",
"2) We show its robustness against noise caused by corrupted tokenization, further establishing the importance of character-level models as a promising research direction.",
"3) For future research, our code and the noisy version of the dataset are publicly available at http://cistern.cis.",
"lmu.de/semiCRF .",
"The input to our model is the raw character sequence.",
"We convert each character to a one-hot representation.",
"Out-of-vocabulary characters are represented with a zero vector.",
"Our vocabulary does not include the space character since there is no part-of-speech label for it.",
"Instead, our model represents space as two space features (lowest level in Figure 1): two binary dimensions indicate whether the previous or next character is a space.",
"Then, a linear transformation is applied to the extended one-hot encoding to produce a character embedding.",
"The character embeddings are fed into a bidirectonal LSTM (biLSTM) (Hochreiter and Schmidhuber, 1997) that computes context-aware representations.",
"These representations form the input to the segment-level feature extractor.",
"Our model partitions a sequence of characters x = { x 1 , . . . , x T } of length T , into (token-like) segments s = { s 1 , . . . , s | s | } with s j = (cid:104) a j , d j , y j (cid:105) where a j is the starting position of the j th segment, d j is its length and y j is its label.",
"Thus, it assigns the same label y j to the whole segment s j .",
"The sum of the lengths of the segments equals the number of non-space characters: (cid:80) | s | j =1 d j = T .",
"1 The semi-CRF defines the conditional distribution of the input segmentations as: p ( s | x )= 1 Z ( x ) exp ( (cid:80) | s | j =1 F ( s j , x )+ A ( y j 1 , y j )) Z ( x )= (cid:80) s (cid:48) S exp ( (cid:80) | s (cid:48) | j =1 F ( s (cid:48) j , x )+ A ( y (cid:48) j 1 , y (cid:48) j )) where F ( s j , x ) is the score for segment s j (includ-ing its label y j ), and A ( y t 1 , y t ) is the transition score of the labels of two adjacent segments.",
"Thus, p ( s | x ) jointly models the segmentation and label assignment.",
"For the normalization term Z ( x ) , we sum over the set of all possible segmentations S .",
"The score F ( s j , x ) is computed as: F ( s j , x ) = w (cid:62) y j f ( s j , x ) + b y j where W = ( w 1 , . . . , w | Y | ) (cid:62) R | Y | D and 1 For efficiency, we define a maximum segment length L : d j < L, 1 j | s | .",
"L is a hyperparameter.",
"We choose it based on the observed segment lengths in the training set.",
"b = ( b 1 , . . . , b | Y | ) (cid:62) R | Y | are trained parameters, f ( s j , x ) RD is the feature representation of the labeled segment s j , | Y | is the number of output classes and D is the length of the segment representation.",
"For training and decoding, we use the semi-Markov analogies of the forward and Viterbi algorithm, respectively (Sarawagi and Cohen, 2005).",
"In order to avoid numerical instability, all computations are performed in log-space.",
"Sarawagi and Cohen (2005) and Yang and Cardie (2012) compute segment-level features by handcrafted rules.",
"Recent work learns the features automatically with NNs (Kong et al., 2015; Zhuo et al., 2016).",
"This avoids the manual design of new features for new languages/tasks.",
"We adopt Gated Recursive Convolutional Neural Networks (grConv) (Cho et al., 2014; Zhuo et al., 2016) since they allow to hierarchically combine features for segments.",
"We argue that this is especially useful for compositionality in language.",
"An example is the word airport which can be composed of the segments air and port.",
"GrConv constructs features by recursively combining adjacent segment representations in a pyramid shape way (see Figure 1).",
"The d th level of the pyramid consists of all representations for segments of length d .",
"The first level holds the character representations from our biLSTM.",
"where WL , WR RD D and b w RD are globally shared parameters, L , M and R are gates, g is a non-linearity and denotes element-wise multiplication.",
"The gates are illustrated in the blue box of Figure 1 and described in (Zhuo et al., 2016).",
"Our implementation is in PyTorch (Paszke et al., 2017).",
"Hyperparameters are tuned on the development set.",
"We use mini-batch gradient descent with a batch size of 20 and Adam (Kingma and Ba, 2014) as the optimizer.",
"The learning rate is 1e-3, the coefficients for computing running averages of the gradient and its square are 0.9 and 0.999, respectively.",
"A term of 1e-8 is added to the denominator for numerical stability.",
"We use character embeddings of size 60 and three stacked biLSTM layers with 100 hidden units for each direction.",
"For the semi-CRF, we set the maximum segment length to L = 23 as tokens of bigger length are rarely seen in the training sets.",
"To avoid overfitting, we apply dropout with a probability of 0.25 on each layer including the input.",
"For input dropout, we randomly replace a character embedding with a zero vector, similar to Gillick et al. (2016).",
"This avoids overfitting to local character patterns.",
"Moreover, we employ early stopping on the development set with a minimum of 20 training epochs.",
"We run our experiments on a gpu which speeds up the training compared to multiple cpu cores considerably.",
"We assume that it especially benefits from parallelizing the computation of each level of the grConv pyramid.",
"Data and Evaluation.",
"To compare our model to state-of-the-art character-based POS taggers, we evaluate its accuracy on the English part of the Universal Dependencies (UD) v1.2 dataset (Nivre et al., 2015).",
"For multilingual experiments, we use the English (EN), Chinese (ZH), Japanese (JA) and Vietnamese (VI) part of UD v2.0 2 (Nivre and 2 UD v1.2 does not provide data for JA, VI, ZH.",
"Zeljko Agic, 2017), using the splits, training and evaluation rules from the CoNNL 2017 shared task (Zeman et al., 2017).",
"In particular, we calculate joint tokenization and UPOS (universal POS) F 1 scores.",
"Baselines for UD v1.2.",
"We compare our model to two character-based models that are state of the art on UD v1.2: bilstm-aux (Plank et al., 2016) and CNN Tagger (Yu et al., 2017).",
"We also compare to a state-of-the-art word-based CRF model MarMot 3 (Muller and Schutze, 2015).",
"Results on English (UD v1.2).",
"Table 1 provides our results on UD v1.2, categorizing the models into token-level ( (cid:126)w ) and character-only models ( (cid:126)c ).",
"While most pure character-level models cannot ensure consistent labels for each character of a token, our semi-CRF outputs correct segments in most cases (tokenization F 1 is 98.69%, see Table 4), and ensures a single label for all characters of a segment.",
"Our model achieves the best results among all character-level models and comparable results to the word-level model MarMot.",
"In addition, we assess the impact of two components of our model: the space feature (see Section 2.1) and grConv (see Section 2.2.1).",
"Table 1 shows that the performance of our model decreases when ablating the space feature, confirming that information about spaces plays a valuable role for English.",
"To evaluate the effectiveness of grConv for segment representations, we replace it with a Segmental Recurrent Neural Network (SRNN) (Kong et al., 2015).",
"4 SRNN uses dynamic programming and biLSTMs to create segment representations.",
"Its performance is slightly worse compared to grConv (last row of Table 1).",
"We attribute 3 http://cistern.cis.lmu.de/marmot/ 4 In an initial experiment, we also replaced it with a simpler method that creates a segment representation by subtracting the character biLSTM hidden state of the segment start from the hidden state of the segment end.",
"This is one of the segment-level features employed, for instance, by Ye and Ling (2018).",
"However, this approach did not lead to promising results in our case.",
"We assume that more sophisticated methods like grConv or SRNN are needed in this setup.",
"this to the different way of feature creation: While grConv hierarchically combines context-enhanced n-grams, SRNN constructs segments in a sequential order.",
"The latter may be less suited for compositional segments like airport.",
"Baselines for UD v2.0.",
"We compare to the top performing models for EN, JA, VI, ZH from the CoNLL 2017 shared task: UDPipe 1.2 (Straka and Strakova, 2017), Stanford (Dozat et al., 2017), FBAML (Qian and Liu, 2017), TRL (Kanayama et al., 2017), and IMS (Bjorkelund et al., 2017).",
"Multilingual Results (UD v2.0).",
"Table 2 provides our results.",
"While for each language another shared task system performs best, our system performs consistently well across languages (best or second-best except for EN), leading to the best average scores for both tokenization and POS tagging.",
"Moreover, it matches the state of the art for Chinese (ZH) and Vietnamese (VI), two languages with very different characteristics in tokenization.",
"To further investigate the robustness of our model, we conduct experiments with different levels of corrupted tokenization in English.",
"We argue that this could also give us insights into why it performs well on languages with difficult tokenization, e.g., on Chinese which omits spaces between tokens, or on Vietnamese which has spaces inside tokens, after each syllable.",
"Note that we do not apply input dropout for these experiments, since the corrupt tokenization already acts as a regularizer.",
"Data.",
"We are not aware of a POS tagging dataset with corrupted tokenization.",
"Thus, we create one based on UD v1.2 (EN).",
"For each token, we either delete the space after it with probability P = p d or insert a space between two characters with P = p i : The fox chased the rabbit The f ox cha sed therabbit .",
"We vary p d and p i to construct three datasets with different noise levels (LOW, MID, HIGH, see Table 3).",
"We note that there are more sophisticated ways of creating er-rors in text.",
"An example is Kasewa et al. (2018) who generate grammatical errors.",
"We leave the investigation of other methods for generating tokenization errors to future work.",
"Labeling.",
"As mentioned before, we either delete the space after a token with probability p d or insert a space between two of its characters with probability p i .",
"We assign the label from the original token to every sub-token created by space insertion.",
"For space deletions, we randomly choose one of the two original labels for training and evaluate against the union of them.",
"Figure 2 shows an example.",
"The fox chased the rabbit DET NOUN VERB DET NOUN The f ox cha sed therabbit DET NOUN NOUN VERB VERB {DET|NOUN} Figure 2: Example of label assignment.",
"Baseline.",
"We compare our joint model to a traditional pipeline of tokenizer (UDpipe 1.0) 5 and token-level POS tagger (MarMot).",
"6 We re-train MarMot on the corrupted datasets.",
"Evaluation.",
"We evaluate the models on the noisy datasets using two different metrics:",
"(i) tokenization and joint token-POS F 1 as in Table 2, and",
"(ii) a relaxed variant of POS tag accuracies.",
"With the latter, we can assess the performance of MarMot without penalizing it for potential errors of UDpipe.",
"For calculating the relaxed accuracy, we count the POS tag of a gold token as correct if MarMot predicts the tag for any subpart of it.",
"5 http://lindat.mff.cuni.cz/services/udpipe/ 6 In contrast to Table 1 where we use gold tokens for MarMot.",
"We provide more details on the relaxed evaluation (description, examples and implementation) in our code repository.",
"Note that we apply the relaxed evaluation only to UDpipe+MarMot but not to our model.",
"The output of our model is directly evaluated against the gold labels of the clean corpus.",
"Results.",
"The performance of our model decreases only slightly when increasing the noise level while the performance of UDpipe+MarMot drops significantly (Table 4).",
"This confirms that our model is robust against noise from tokenization.",
"Note that most other character-based models would suffer from the same performance drop as MarMot since they rely on tokenized inputs.",
"Discussion.",
"The results in Table 4 show that our model can reliably recover token boundaries, even in noisy scenarios.",
"This also explains its strong performance across languages: It can handle different languages, independent of whether the language merges tokens without whitespaces (e.g., Chinese) or separates tokens with whitespaces into syllables (e.g., Vietnamese).",
"Character-based POS Tagging.",
"Most work uses characters only to build more robust token representations but still relies on external tokenizers (Santos and Zadrozny, 2014; Lample et al., 2016; Plank et al., 2016; Dozat et al., 2017; Liu et al., 2017).",
"In contrast, our model jointly learns segmentation and POS tagging.",
"Gillick et al. (2016) do not rely on tokenization either but in contrast to their greedy decoder, our model optimizes the whole output sequence and is able to revise local decisions (Lafferty et al., 2001).",
"For processing characters, LSTMs (Lample et al., 2016; Plank et al., 2016; Dozat et al., 2017) or CNNs (Ma and Hovy, 2016; Yu et al., 2017) are used.",
"Our model combines biLSTMs and grConv to model both the context of characters (LSTM) and the compositionality of language (grConv).",
"Joint Segmentation and POS Tagging.",
"The top performing models of EN, JA, VI and ZH use a pipeline of tokenizer and word-based POS tagger but do not treat both tasks jointly (Bjorkelund et al., 2017; Dozat et al., 2017; Kanayama et al., 2017; Qian and Liu, 2017).",
"Especially for Chinese, there is a lot of work on joint word segmentation and POS tagging, e.g., (Zhang and Clark, 2008; Sun, 2011; Hatori et al., 2012; Zheng et al., 2013; Kong et al., 2015; Cai and Zhao, 2016; Chen et al., 2017; Shao et al., 2017), of which some use CRFs to predict one POS tag per character.",
"However, this is hard to transfer to languages like English and Vietnamese where single characters are less informative and tokens are much longer, resulting in a larger combinatory label space.",
"Thus, we choose a semi-Markov formalization to directly model segments.",
"Semi-Markov CRFs for Sequence Tagging.",
"Zhuo et al. (2016) and Ye and Ling (2018) apply semi-CRFs to word-level inputs for named entity recognition.",
"In contrast, we model character-based POS tagging.",
"Thus, the expected length of our character segments is considerably larger than the expected length of word-based segments for NER.",
"Kong et al. (2015) build SRNNs that we use as a baseline.",
"In contrast to their 0-order model, we train a 1-order semi-CRF to model dependencies between segment labels.",
"We presented an end-to-end model for character-based part-of-speech tagging that uses semi-Markov conditional random fields to jointly segment and label a sequence of characters.",
"Input representations and segment representations are trained parameters learned in end-to-end training by the neural network part of the model.",
"The model achieves state-of-the-art results on two benchmark datasets across several typologically diverse languages.",
"By corrupting the tokenization of the dataset, we show the robustness of our model, explaining its good performance on languages with difficult tokenization.",
"This work was funded by the European Research Council (ERC #740516).",
"We would like to thank the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"objective",
"result",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Single document summarization is the task of producing a shorter version of a document while preserving its principal information content.",
"In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.",
"We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when evaluated automatically and by humans.",
"1 1 Introduction Automatic summarization has enjoyed wide popularity in natural language processing due to its potential for various information access applications.",
"Examples include tools which aid users navigate and digest web content (e.g., news, social media, product reviews), question answering, and personalized recommendation engines.",
"Single document summarization the task of producing a shorter version of a document while preserving its information content is perhaps the most basic of summarization tasks that have been identified over the years (see Nenkova and McKeown, 2011 for a comprehensive overview).",
"Modern approaches to single document summarization are data-driven, taking advantage of the success of neural network architectures and their ability to learn continuous features without recourse to preprocessing tools or linguistic annotations.",
"Abstractive summarization involves various text rewriting operations (e.g., substitution, deletion, reordering) and has been recently framed as a sequence-to-sequence problem (Sutskever et al., 2014).",
"Central in most approaches (Rush et al., 2015; Chen et al., 2016; Nallapati et al., 2016; See 1 Our code and data are available here: https://github. com/shashiongithub/Refresh . et al., 2017; Tan and Wan, 2017; Paulus et al., 2017) is an encoder-decoder architecture modeled by recurrent neural networks.",
"The encoder reads the source sequence into a list of continuous-space representations from which the decoder generates the target sequence.",
"An attention mechanism (Bahdanau et al., 2015) is often used to locate the region of focus during decoding.",
"Extractive systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document.",
"A few recent approaches (Cheng and Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2017; Yasunaga et al., 2017) conceptualize extractive summarization as a sequence labeling task in which each label specifies whether each document sentence should be included in the summary.",
"Existing models rely on recurrent neural networks to derive a meaning representation of the document which is then used to label each sentence, taking the previously labeled sentences into account.",
"These models are typically trained using cross-entropy loss in order to maximize the likelihood of the ground-truth labels and do not necessarily learn to rank sentences based on their importance due to the absence of a ranking-based objective.",
"Another discrepancy comes from the mismatch between the learning objective and the evaluation criterion, namely ROUGE (Lin and Hovy, 2003), which takes the entire summary into account.",
"In this paper we argue that cross-entropy training is not optimal for extractive summarization.",
"Models trained this way are prone to generating verbose summaries with unnecessarily long sentences and redundant information.",
"We propose to overcome these difficulties by globally optimizing the ROUGE evaluation metric and learning to rank sentences for summary generation through a reinforcement learning objective.",
"Similar to previous work (Cheng and Lapata, 2016; Narayan et al., 2017; Nallapati et al., 2017), our neural summarization model consists of a hierarchical docu-1747 ment encoder and a hierarchical sentence extractor.",
"During training, it combines the maximum-likelihood cross-entropy loss with rewards from policy gradient reinforcement learning to directly optimize the evaluation metric relevant for the summarization task.",
"We show that this global optimization framework renders extractive models better at discriminating among sentences for the final summary; a sentence is ranked high for selection if it often occurs in high scoring summaries.",
"We report results on the CNN and DailyMail news highlights datasets (Hermann et al., 2015) which have been recently used as testbeds for the evaluation of neural summarization systems.",
"Experimental results show that when evaluated automatically (in terms of ROUGE), our model outperforms state-of-the-art extractive and abstractive systems.",
"We also conduct two human evaluations in order to assess",
"(a) which type of summary participants prefer (we compare extractive and abstractive systems) and",
"(b) how much key information from the document is preserved in the summary (we ask participants to answer questions pertaining to the content in the document by reading system summaries).",
"Both evaluations overwhelmingly show that human subjects find our summaries more informative and complete.",
"Our contributions in this work are three-fold: a novel application of reinforcement learning to sentence ranking for extractive summarization; corroborated by analysis and empirical results showing that cross-entropy training is not well-suited to the summarization task; and large scale user studies following two evaluation paradigms which demonstrate that state-of-the-art abstractive systems lag behind extractive ones when the latter are globally trained.",
"Given a document D consisting of a sequence of sentences ( s 1 , s 2 ,..., s n ) , an extractive summarizer aims to produce a summary S by selecting m sentences from D (where m < n ).",
"For each sentence s i D , we predict a label y i { 0 , 1 } (where 1 means that s i should be included in the summary) and assign a score p ( y i | s i , D , ) quantifying s i 's relevance to the summary.",
"The model learns to assign p ( 1 | s i , D , ) > p ( 1 | s j , D , ) when sentence s i is more relevant than s j .",
"Model parameters are denoted by .",
"We estimate p ( y i | s i , D , ) using a neural network model and assemble a summary S by selecting m sentences with top p ( 1 | s i , D , ) scores.",
"Our architecture resembles those previously proposed in the literature (Cheng and Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2017).",
"The main components include a sentence encoder, a document encoder, and a sentence extractor (see the left block of Figure 1) which we describe in more detail below.",
"Sentence Encoder A core component of our model is a convolutional sentence encoder which encodes sentences into continuous representations.",
"In recent years, CNNs have proven useful for various NLP tasks (Collobert et al., 2011; Kim, 2014; Kalchbrenner et al., 2014; Zhang et al., 2015; Lei et al., 2015; Kim et al., 2016; Cheng and Lapata, 2016) because of their effectiveness in identifying salient patterns in the input (Xu et al., 2015).",
"In the case of summarization, CNNs can identify named-entities and events that correlate with the gold summary.",
"We use temporal narrow convolution by applying a kernel filter K of width h to a window of h words in sentence s to produce a new feature.",
"This filter is applied to each possible window of words in s to produce a feature map f R k h + 1 where k is the sentence length.",
"We then apply max-pooling over time over the feature map f and take the maximum value as the feature corresponding to this particular filter K .",
"We use multiple kernels of various sizes and each kernel multiple times to construct the representation of a sentence.",
"In Figure 1, kernels of size 2 (red) and 4 (blue) are applied three times each.",
"Max-pooling over time yields two feature lists f K 2 and f K 4 R 3 .",
"The final sentence embeddings have six dimensions.",
"Document Encoder The document encoder composes a sequence of sentences to obtain a document representation.",
"We use a recurrent neural network with Long Short-Term Memory (LSTM) cells to avoid the vanishing gradient problem when training long sequences (Hochreiter and Schmid-huber, 1997).",
"Given a document D consisting of a sequence of sentences ( s 1 , s 2 ,..., s n ) , we follow common practice and feed sentences in reverse order (Sutskever et al., 2014; Li et al., 2015; Filip-pova et al., 2015; Narayan et al., 2017).",
"This way we make sure that the network also considers the top sentences of the document which are particularly important for summarization (Rush et al., 2015; Nallapati et al., 2016).",
"Sentence Extractor Our sentence extractor sequentially labels each sentence in a document with 1 (relevant for the summary) or 0 (otherwise).",
"It is implemented with another RNN with LSTM cells and a softmax layer.",
"At time t i , it reads sentence s i and makes a binary prediction, conditioned on the document representation (obtained from the document encoder) and the previously labeled sentences.",
"This way, the sentence extractor is able to identify locally and globally important sentences within the document.",
"We rank the sentences in a document D by p ( y i = 1 | s i , D , ) , the confidence scores assigned by the softmax layer of the sentence extractor.",
"We learn to rank sentences by training our network in a reinforcement learning framework, directly optimizing the final evaluation metric, namely ROUGE (Lin and Hovy, 2003).",
"Before we describe our training algorithm, we elaborate on why the maximum-likelihood cross-entropy objective could be deficient for ranking sentences for summarization (Section 3).",
"Then, we define our reinforcement learning objective in Section 4 and show that our new way of training allows the model to better discriminate amongst sentences, i.e., a sentence is ranked higher for selection if it often occurs in high scoring summaries.",
"Previous work optimizes summarization models by maximizing p ( y | D , ) = ni = 1 p ( y i | s i , D , ) , the likelihood of the ground-truth labels y = ( y 1 , y 2 ,..., y n ) for sentences ( s 1 , s 2 ,..., s n ) , given document D and model parameters .",
"This objective can be achieved by minimizing the cross-entropy loss at each decoding step: L ( ) = n i = 1 log p ( y i | s i , D , ) .",
"Cross-entropy training leads to two kinds of discrepancies in the model.",
"The first discrepancy comes from the disconnect between the task definition and the training objective.",
"While MLE in Equation (1) aims to maximize the likelihood of the ground-truth labels, the model is",
"(a) expected to rank sentences to generate a summary and",
"(b) evaluated using ROUGE at test time.",
"The second discrepancy comes from the reliance on ground-truth labels.",
"Document collections for training summarization systems do not naturally contain labels indicating which sentences should be extracted.",
"Instead, they are typically accompanied by abstractive summaries from which sentence-level labels are extrapolated.",
"Cheng and Lapata (2016) follow Woodsend and Lapata (2010) in adopting a rule-based method which assigns labels to each sentence in the document individually based on their semantic correspondence with the gold summary (see the fourth column in Table 1).",
"An alternative method (Svore et al., 2007; Cao et al., 2016; Nallapati et al., 2017) iden-tifies the set of sentences which collectively gives the highest ROUGE with respect to the gold summary.",
"Sentences in this set are labeled with 1 and 0 otherwise (see the column 5 in Table 1).",
"overfit the data.",
"For example, the document in Table 1 has 12 positively labeled sentences out of 31 in total (only first 10 are shown).",
"Collective labels present a better alternative since they only pertain to the few sentences deemed most suitable to form the summary.",
"However, a model trained with cross-entropy loss on collective labels will under-fit the data as it will only maximize probabilities p ( 1 | s i , D , ) for sentences in this set (e.g., sentences { 0 , 11 , 13 } in Table 1) and ignore all other sentences.",
"We found that there are many candidate summaries with high ROUGE scores which could be considered during training.",
"Table 1 (last column) shows candidate summaries ranked according to the mean of ROUGE-1, ROUGE-2, and ROUGE-L F 1 scores.",
"Interestingly, multiple top ranked summaries have reasonably high ROUGE scores.",
"For example, the average ROUGE for the summaries ranked second (0,13), third (11,13), and fourth (0,1,13) is 57.5%, 57.2%, and 57.1%, and all top 16 summaries have ROUGE scores more or equal to 50%.",
"A few sentences are indicative of important content and appear frequently in the summaries: sentence 13 occurs in all summaries except one, while sentence 0 appears in several summaries too.",
"Also note that summaries (11,13) and (1,13) yield better ROUGE scores compared to longer summaries, and may be as informative, yet more concise, alternatives.",
"These discrepancies render the model less ef-ficient at ranking sentences for the summarization task.",
"Instead of maximizing the likelihood of the ground-truth labels, we could train the model to predict the individual ROUGE score for each sentence in the document and then select the top m sentences with highest scores.",
"But sentences with individual ROUGE scores do not necessarily lead to a high scoring summary, e.g., they may convey overlapping content and form verbose and redundant summaries.",
"For example, sentence 3, despite having a high individual ROUGE score (35.6%), does not occur in any of the top 5 summaries.",
"We next explain how we address these issues using reinforcement learning.",
"Reinforcement learning (Sutton and Barto, 1998) has been proposed as a way of training sequence-1750",
"to-sequence generation models in order to directly optimize the metric used at test time, e.g., BLEU or ROUGE (Ranzato et al., 2015).",
"We adapt reinforcement learning to our formulation of extractive summarization to rank sentences for summary generation.",
"We propose an objective function that combines the maximum-likelihood cross-entropy loss with rewards from policy gradient reinforcement learning to globally optimize ROUGE.",
"Our training algorithm allows to explore the space of possible summaries, making our model more robust to unseen data.",
"As a result, reinforcement learning helps extractive summarization in two ways:",
"(a) it directly optimizes the evaluation metric instead of maximizing the likelihood of the ground-truth labels and",
"(b) it makes our model better at discriminating among sentences; a sentence is ranked high for selection if it often occurs in high scoring summaries.",
"We cast the neural summarization model introduced in Figure 1 in the Reinforcement Learning paradigm (Sutton and Barto, 1998).",
"Accordingly, the model can be viewed as an agent which interacts with an environment consisting of documents.",
"At first, the agent is initialized randomly, it reads document D and predicts a relevance score for each sentence s i D using policy p ( y i | s i , D , ) , where are model parameters.",
"Once the agent is done reading the document, a summary with labels y is sampled out of the ranked sentences.",
"The agent is then given a reward r commensurate with how well the extract resembles the gold-standard summary.",
"Specifically, as reward function we use mean F 1 of ROUGE-1, ROUGE-2, and ROUGE-L.",
"Unigram and bigram overlap (ROUGE-1 and ROUGE-2) are meant to assess informativeness, whereas the longest common subsequence (ROUGE-L) is meant to assess fluency.",
"We update the agent using the REINFORCE algorithm (Williams, 1992) which aims to minimize the negative expected reward: L ( ) = E y p [ r ( y )] (2) where, p stands for p ( y | D , ) .",
"REINFORCE is based on the observation that the expected gradient of a non-differentiable reward function (ROUGE, in our case) can be computed as follows: L ( ) = E y p [ r ( y ) log p ( y | D , )] (3) While MLE in Equation (1) aims to maximize the likelihood of the training data, the objective in Equation (2) learns to discriminate among sentences with respect to how often they occur in high scoring summaries.",
"Computing the expectation term in Equation (3) is prohibitive, since there is a large number of possible extracts.",
"In practice, we approximate the expected gradient using a single sample y from p for each training example in a batch: L ( ) r ( y ) log p ( y | D , ) r ( y ) n i = 1 log p ( y i | s i , D , ) (4) Presented in its original form, the REINFORCE algorithm starts learning with a random policy which can make model training challenging for complex tasks like ours where a single document can give rise to a very large number of candidate summaries.",
"We therefore limit the search space of y in Equation (4) to the set of largest probability samples Y .",
"We approximate Y by the k extracts which receive highest ROUGE scores.",
"More concretely, we assemble candidate summaries efficiently by first selecting p sentences from the document which on their own have high ROUGE scores.",
"We then generate all possible combinations of p sentences subject to maximum length m and evaluate them against the gold summary.",
"Summaries are ranked according to F 1 by taking the mean of ROUGE-1, ROUGE-2, and ROUGE-L.",
"Y contains these top k candidate summaries.",
"During training, we sample y from Y instead of p ( y | D , ) .",
"Ranzato et al. (2015) proposed an alternative to REINFORCE called MIXER (Mixed Incremental Cross-Entropy Reinforce) which first pretrains the model with the cross-entropy loss using ground truth labels and then follows a curriculum learning strategy (Bengio et al., 2015) to gradually teach the model to produce stable predictions on its own.",
"In our experiments MIXER performed worse than the model of Nallapati et al. (2017) just trained on collective labels.",
"We conjecture that this is due to the unbounded nature of our ranking problem.",
"Recall that our model assigns relevance scores to sentences rather than words.",
"The space of sentential representations is vast and fairly unconstrained compared to other prediction tasks operating with fixed vocabularies (Li et al., 2016; Paulus et al., 2017; Zhang and Lapata, 2017).",
"Moreover, our approximation of the gradient allows the model to 1751 converge much faster to an optimal policy.",
"Advantageously, we do not require an online reward estimator, we pre-compute Y , which leads to a significant speedup during training compared to MIXER (Ranzato et al., 2015) and related training schemes (Shen et al., 2016).",
"In this section we present our experimental setup for assessing the performance of our model which we call REFRESH as a shorthand for RE in F o R cement Learning-based E xtractive S ummarization.",
"We describe our datasets, discuss implementation details, our evaluation protocol, and the systems used for comparison.",
"Summarization Datasets We evaluated our models on the CNN and DailyMail news highlights datasets (Hermann et al., 2015).",
"We used the standard splits of Hermann et al. (2015) for training, validation, and testing (90,266/1,220/1,093 documents for CNN and 196,961/12,148/10,397 for DailyMail).",
"We did not anonymize entities or lower case tokens.",
"We followed previous studies (Cheng and Lapata, 2016; Nallapati et al., 2016, 2017; See et al., 2017; Tan and Wan, 2017) in assuming that the story highlights associated with each article are gold-standard abstractive summaries.",
"During training we use these to generate high scoring extracts and to estimate rewards for them, but during testing, they are used as reference summaries to evaluate our models.",
"Implementation Details We generated extracts by selecting three sentences ( m = 3) for CNN articles and four sentences ( m = 4) for DailyMail articles.",
"These decisions were informed by the fact that gold highlights in the CNN/DailyMail validation sets are 2.6/4.2 sentences long.",
"For both datasets, we estimated high-scoring extracts using 10 document sentences ( p = 10) with highest ROUGE scores.",
"We tuned the initialization parameter k for Y on the validation set: we found that our model performs best with k = 5 for the CNN dataset and k = 15 for the DailyMail dataset.",
"We used the One Billion Word Benchmark corpus (Chelba et al., 2013) to train word embeddings with the skip-gram model (Mikolov et al., 2013) using context window size 6, negative sampling size 10, and hierarchical softmax",
"1. Known words were initialized with pre-trained embeddings of size 200.",
"Embeddings for unknown words were initialized to zero, but estimated during training.",
"LEAD A SkyWest Airlines flight made an emergency landing in Buffalo, New York, on Wednesday after a passenger lost consciousness, officials said.",
"The passenger received medical attention before being released, according to Marissa Snow, spokeswoman for SkyWest.",
"She said the airliner expects to accommodate the 75 passengers on another aircraft to their original destination Hartford, Connecticut later Wednesday afternoon.",
"S ee e t a l .",
"Skywest Airlines flight made an emergency landing in Buffalo, New York, on Wednesday after a passenger lost consciousness.",
"She said the airliner expects to accommodate the 75 passengers on another aircraft to their original destination Hartford, Connecticut.",
"REFRESH A SkyWest Airlines flight made an emergency landing in Buffalo, New York, on Wednesday after a passenger lost consciousness, officials said.",
"The passenger received medical attention before being released, according to Marissa Snow, spokeswoman for SkyWest.",
"The Federal Aviation Administration initially reported a pressurization problem and said it would investigate.",
"GOLD FAA backtracks on saying crew reported a pressurization problem One passenger lost consciousness The plane descended 28,000 feet in three minutes Q 1 Who backtracked on saying crew reported a pressurization problem?",
"( FAA )",
"Q 2 How many passengers lost consciousness in the incident?",
"( One )",
"Q 3 How far did the plane descend in three minutes?",
"( 28,000 feet )",
"Sentences were padded with zeros to a length of 100.",
"For the sentence encoder, we used a list of kernels of widths 1 to 7, each with output chan-nel size of 50 (Kim et al., 2016).",
"The sentence embedding size in our model was 350.",
"For the recurrent neural network component in the document encoder and sentence extractor, we used a single-layered LSTM network with size 600.",
"All input documents were padded with zeros to a maximum document length of 120.",
"We performed minibatch cross-entropy training with a batch size of 20 documents for 20 training epochs.",
"It took around 12 hrs on a single GPU to train.",
"After each epoch, we evaluated our model on the validation set and chose the best performing model for the test set.",
"During training we used the Adam optimizer (Kingma and Ba, 2015) with initial learning rate 0 .",
"001.",
"Our system is implemented in TensorFlow (Abadi et al., 2015).",
"Evaluation We evaluated summarization quality using F 1 ROUGE (Lin and Hovy, 2003).",
"We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence 1752 (ROUGE-L) as a means of assessing fluency.",
"2 We compared REFRESH against a baseline which simply selects the first m leading sentences from each document (LEAD ) and two neural models similar to ours (see left block in Figure 1), both trained with cross-entropy loss.",
"Cheng and Lapata (2016) train on individual labels, while Nallapati et al. (2017) use collective labels.",
"We also compared our model against the abstractive systems of Chen et al. (2016), Nallapati et al. (2016), See et al. (2017), and Tan and Wan (2017).",
"3 In addition to ROUGE which can be misleading when used as the only means to assess the informativeness of summaries (Schluter, 2017), we also evaluated system output by eliciting human judgments in two ways.",
"In our first experiment, participants were presented with a news article and summaries generated by three systems: the LEAD baseline, abstracts from See et al. (2017), and extracts from REFRESH .",
"We also included the human-authored highlights.",
"4 Participants read the articles and were asked to rank the summaries from best (1) to worst (4) in order of informativeness (does the summary capture important information in the article?) and fluency (is the summary written in well-formed English?).",
"We did not allow any ties.",
"We randomly selected 10 articles from the CNN test set and 10 from the DailyMail test set.",
"The study was completed by five participants, all native or proficient English speakers.",
"Each participant was presented with the 20 articles.",
"The order of summaries to rank was randomized per article and the order of articles per participant.",
"Examples of summaries our subjects ranked are shown in Figure",
"2. Our second experiment assessed the degree to which our model retains key information from the document following a question-answering (QA) paradigm which has been previously used to evaluate summary quality and text compression (Mor-2 We used pyrouge, a Python package, to compute all ROUGE scores with parameters -a -c 95 -m -n 4 -w 1.2. 3 Cheng and Lapata (2016) report ROUGE recall scores on the DailyMail dataset only.",
"We used their code ( https:// github.com/cheng6076/NeuralSum ) to produce ROUGE F 1 scores on both CNN and DailyMail datasets.",
"For other systems, all results are taken from their papers.",
"4 We are grateful to Abigail See for providing us with the output of her system.",
"We did not include output from Nallapati et al. (2017), Chen et al. (2016), Nallapati et al. (2016), or Tan and Wan (2017) in our human evaluation study, as these models are trained on a named-entity anonymized version of the CNN and DailyMail datasets, and as result produce summaries which are not comparable to ours.",
"We did not include extracts from Cheng and Lapata (2016) either as they were significantly inferior to LEAD (see Table 2).",
"ris et al., 1992; Mani et al., 2002; Clarke and Lapata, 2010).",
"We created a set of questions based on the gold summary under the assumption that it highlights the most important document content.",
"We then examined whether participants were able to answer these questions by reading system summaries alone without access to the article.",
"The more questions a system can answer, the better it is at summarizing the document as a whole.",
"We worked on the same 20 documents used in our first elicitation study.",
"We wrote multiple fact-based question-answer pairs for each gold summary without looking at the document.",
"Questions were formulated so as to not reveal answers to subsequent questions.",
"We created 71 questions in total varying from two to six questions per gold summary.",
"Example questions are given in Figure",
"2. Participants read the summary and answered all associated questions as best they could without access to the original document or the gold summary.",
"Subjects were shown summaries from three systems: the LEAD baseline, the abstractive system of See et al. (2017), and REFRESH .",
"Five participants answered questions for each summary.",
"We used the same scoring mechanism from Clarke and Lapata (2010), i.e., a correct answer was marked with a score of one, partially correct answers with a score of 0.5, and zero otherwise.",
"The final score for a system is the average of all its question scores.",
"Answers were elicited using Amazon's Mechanical Turk crowdsourcing platform.",
"We uploaded data in batches (one system at a time) on Mechanical Turk to ensure that same participant does not evaluate summaries from different systems on the same set of questions.",
"We report results using automatic metrics in Table",
"2. The top part of the table compares REFRESH against related extractive systems.",
"The bottom part reports the performance of abstractive systems.",
"We present three variants of LEAD , one is computed by ourselves and the other two are reported in Nallapati et al. (2017) and See et al. (2017).",
"Note that they vary slightly due to differences in the preprocessing of the data.",
"We report results on the CNN and DailyMail datasets and their combination (CNN + DailyMail).",
"Cross-Entropy vs Reinforcement Learning The results in Table 2 show that REFRESH is superior to our LEAD baseline and extractive systems across datasets and metrics.",
"It outperforms 1753 Models CNN DailyMail CNN + DailyMail R1 R2 RL R1 R2 RL R1 R2 RL LEAD (ours) 29.1 11.1 25.9 40.7 18.3 37.2 39.6 17.7 36.2 LEAD (Nallapati et al., 2017) 39.2 15.7 35.5 LEAD (See et al., 2017) 40.3 17.7 36.6 Cheng and Lapata (2016) 28.4 10.0 25.0 36.2 15.2 32.9 35.5 14.7 32.2 Nallapati et al. (2017) 39.6 16.2 35.3 REFRESH 30.4 11.7 26.9 41.0 18.8 37.7 40.0 18.2 36.6 Chen et al. (2016) 27.1 8.2 18.7 Nallapati et al. (2016) 35.4 13.3 32.6 See et al. (2017) 39.5 17.3 36.4 Tan and Wan (2017) 30.3 9.8 20.0 38.1 13.9 34.0 Table 2: Results on the CNN and DailyMail test sets.",
"the extractive system of Cheng and Lapata (2016) which is trained on individual labels.",
"REFRESH is not directly comparable with Nallapati et al. (2017) as they generate anonymized summaries.",
"Their system lags behind their LEAD baseline on ROUGE-L on the CNN+DailyMail dataset (35.5% vs 35.3%).",
"Also note that their model is trained on collective labels and has a significant lead over Cheng and Lapata (2016).",
"As discussed in Section 3 cross-entropy training on individual labels tends to overgenerate positive labels leading to less informative and verbose summaries.",
"Extractive vs Abstractive Systems Our automatic evaluation results further demonstrate that REFRESH is superior to abstractive systems (Chen et al., 2016; Nallapati et al., 2016; See et al., 2017; Tan and Wan, 2017) which are all variants of an encoder-decoder architecture (Sutskever et al., 2014).",
"Despite being more faithful to the actual summarization task (hand-written summaries combine several pieces of information from the original document), abstractive systems lag behind the LEAD baseline.",
"Tan and Wan (2017) present a graph-based neural model, which manages to outperform LEAD on ROUGE-1 but falters when higher order ROUGE scores are used.",
"Amongst abstractive systems See et al. (2017) perform best.",
"Interestingly, their system is mostly extractive, exhibiting a small degree of rewriting; it copies more than 35% of the sentences in the source document, 85% of 4-grams, 90% of 3-grams, 95% of bigrams, and 99% of unigrams.",
"Human Evaluation: System Ranking Table 3 shows, proportionally, how often participants ranked each system, 1st, 2nd, and so on.",
"Perhaps unsurprisingly human-authored summaries are considered best (and ranked 1st 39% of the Models 1st 2nd 3rd 4th QA LEAD 0.11 0.21 0.34 0.33 36.33 See et al. (2017) 0.14 0.18 0.31 0.36 28.73 REFRESH 0.35 0.42 0.16 0.07 66.34 GOLD 0.39 0.19 0.18 0.24 Table 3: System ranking and QA-based evaluations.",
"time).",
"REFRESH is ranked 2nd best followed by LEAD and See et al. (2017) which are mostly ranked in 3rd and 4th places.",
"We carried out pairwise comparisons between all models in Table 3 to assess whether system differences are statistically significant.",
"There is no significant difference between LEAD and See et al. (2017), and REFRESH and GOLD (using a one-way ANOVA with posthoc Tukey HSD tests; p < 0 . 01).",
"All other differences are statistically significant.",
"Human Evaluation: Question Answering The results of our QA evaluation are shown in the last column of Table",
"3. Based on summaries generated by REFRESH , participants can answer 66.34% of questions correctly.",
"Summaries produced by LEAD and the abstractive system of See et al. (2017) provide answers for 36.33% and 28.73% of the questions, respectively.",
"Differences between systems are all statistically significant ( p < 0 . 01) with the exception of LEAD and See et al. (2017).",
"Although the QA results in Table 3 follow the same pattern as ROUGE in Table 2, differences among systems are now greatly amplified.",
"QA-based evaluation is more focused and a closer re-flection of users' information need (i.e., to find out what the article is about), whereas ROUGE simply captures surface similarity (i.e., n -gram overlap) 1754 between output summaries and their references.",
"Interestingly, LEAD is considered better than See et al. (2017) in the QA evaluation, whereas we find the opposite when participants are asked to rank systems.",
"We hypothesize that LEAD is indeed more informative than See et al. (2017) but humans prefer shorter summaries.",
"The average length of LEAD summaries is 105.7 words compared to 61.6 for See et al. (2017).",
"Traditional summarization methods manually define features to rank sentences for their salience in order to identify the most important sentences in a document or set of documents (Kupiec et al., 1995; Mani, 2001; Radev et al., 2004; Filatova and Hatzivassiloglou, 2004; Nenkova et al., 2006; Sparck Jones, 2007).",
"A vast majority of these methods learn to score each sentence independently (Barzilay and Elhadad, 1997; Teufel and Moens, 1997; Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Shen et al., 2007; Schilder and Kondadadi, 2008; Wan, 2010) and a summary is generated by selecting top-scored sentences in a way that is not incorporated into the learning process.",
"Summary quality can be improved heuristically, (Yih et al., 2007), via max-margin methods (Carbonell and Goldstein, 1998; Li et al., 2009), or integer-linear programming (Woodsend and Lapata, 2010; Berg-Kirkpatrick et al., 2011; Woodsend and Lapata, 2012; Almeida and Martins, 2013; Parveen et al., 2015).",
"Recent deep learning methods (Kageback et al., 2014; Yin and Pei, 2015; Cheng and Lapata, 2016; Nallapati et al., 2017) learn continuous features without any linguistic preprocessing (e.g., named entities).",
"Like traditional methods, these approaches also suffer from the mismatch between the learning objective and the evaluation criterion (e.g., ROUGE) used at the test time.",
"In comparison, our neural model globally optimizes the ROUGE evaluation metric through a reinforcement learning objective: sentences are highly ranked if they occur in highly scoring summaries.",
"Reinforcement learning has been previously used in the context of traditional multi-document summarization as a means of selecting a sentence or a subset of sentences from a document cluster.",
"Ryang and Abekawa (2012) cast the sentence selection task as a search problem.",
"Their agent observes a state (e.g., a candidate summary), executes an action (a transition operation that produces a new state selecting a not-yet-selected sen-tence), and then receives a delayed reward based on tf idf.",
"Follow-on work (Rioux et al., 2014) extends this approach by employing ROUGE as part of the reward function, while Hen et al. (2015) further experiment with Q -learning.",
"Molla-Aliod (2017) has adapt this approach to query-focused summarization.",
"Our model differs from these approaches both in application and formulation.",
"We focus solely on extractive summarization, in our case states are documents (not summaries) and actions are relevance scores which lead to sentence ranking (not sentence-to-sentence transitions).",
"Rather than employing reinforcement learning for sentence selection, our algorithm performs sentence ranking using ROUGE as the reward function.",
"The REINFORCE algorithm (Williams, 1992) has been shown to improve encoder-decoder text-rewriting systems by allowing to directly optimize a non-differentiable objective (Ranzato et al., 2015; Li et al., 2016; Paulus et al., 2017) or to inject task-specific constraints (Zhang and Lapata, 2017; Nogueira and Cho, 2017).",
"However, we are not aware of any attempts to use reinforcement learning for training a sentence ranker in the context of extractive summarization.",
"In this work we developed an extractive summarization model which is globally trained by optimizing the ROUGE evaluation metric.",
"Our training algorithm explores the space of candidate summaries while learning to optimize a reward function which is relevant for the task at hand.",
"Experimental results show that reinforcement learning offers a great means to steer our model towards generating informative, fluent, and concise summaries outperforming state-of-the-art extractive and abstractive systems on the CNN and DailyMail datasets.",
"In the future we would like to focus on smaller discourse units (Mann and Thompson, 1988) rather than individual sentences, modeling compression and extraction jointly.",
"Acknowledgments We gratefully acknowledge the support of the European Research Council (Lapata; award number 681760), the European Union under the Horizon 2020 SUMMA project (Narayan, Cohen; grant agreement 688139), and Huawei Technologies (Cohen)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"result",
"method",
"method",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"other",
"method",
"objective",
"objective",
"result",
"method",
"abstain"
] |
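The row above trains an extractive sentence ranker with reinforcement learning, using ROUGE over sampled candidate summaries as a delayed reward. The sketch below illustrates that REINFORCE-style objective in PyTorch; it is a simplified reading, not the authors' implementation, and `rouge_fn` is an assumed stand-in for a caller-supplied ROUGE scorer.

```python
import torch

def reinforce_loss(sent_logits, sentences, ref_summary, rouge_fn, n_samples=4):
    """REINFORCE for a sentence ranker: sample extracts from the per-sentence
    relevance scores, score each sampled summary against the reference with
    ROUGE, and use that (non-differentiable) score as the reward."""
    probs = torch.sigmoid(sent_logits)             # per-sentence relevance
    dist = torch.distributions.Bernoulli(probs)
    loss = 0.0
    for _ in range(n_samples):
        labels = dist.sample()                     # 1 = include the sentence
        candidate = " ".join(s for s, keep in zip(sentences, labels) if keep > 0)
        reward = rouge_fn(candidate, ref_summary)  # delayed, non-differentiable
        # Push up the log-probability of extracts that earned a high reward.
        loss = loss - reward * dist.log_prob(labels).sum()
    return loss / n_samples
```

Sentences that keep appearing in highly scoring sampled summaries receive larger gradients toward higher relevance scores, matching the paper's description that sentences are highly ranked if they occur in highly scoring summaries.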
[
"Publicly traded companies are required to submit periodic reports with eXtensive Business Reporting Language ( xbrl ) word-level tags.",
"Manually tagging the reports is tedious and costly.",
"We, therefore, introduce xbrl tagging as a new entity extraction task for the financial domain and release f i ner -139, a dataset of 1.1M sentences with gold xbrl tags.",
"Unlike typical entity extraction datasets, f i ner 139 uses a much larger label set of 139 entity types.",
"Most annotated tokens are numeric, with the correct tag per token depending mostly on context, rather than the token itself.",
"We show that subword fragmentation of numeric expressions harms bert 's performance, allowing word-level bilstm s to perform better.",
"To improve bert 's performance, we propose two simple and e ective solutions that replace numeric expressions with pseudo-tokens reflecting original token shapes and numeric magnitudes.",
"We also experiment with fin bert , an existing bert model for the financial domain, and release our own bert ( sec bert ), pre-trained on financial filings, which performs best.",
"Through data and error analysis, we finally identify possible limitations to inspire future work on xbrl tagging.",
"Natural language processing ( nlp ) for finance is an emerging research area (Hahn et al., 2019; Chen et al., 2020; El-Haj et al., 2020).",
"Financial data are mostly reported in tables,but substantial information can also be found in textual form, e.g., in company filings, analyst reports, and economic news.",
"Such information is useful in numerous financial intelligence tasks, like stock market prediction (Chen et al., 2019; Yang et al., 2019), financial sentiment analysis (Malo et al., 2014; Wang et al., Source code: https://github.com/nlpaueb/finer Correspondence: [email protected] Figure 1: Sentences from f i ner -139, with xbrl tags on numeric and non-numeric tokens. xbrl tags are actually xml -based and most tagged tokens are numeric. 2013; Akhtar et al., 2017), economic event detection (Ein-Dor et al., 2019; Jacobs et al., 2018; Zhai and Zhang, 2019), and causality analysis (Tabari et al., 2018; Izumi and Sakaji, 2019).",
"In this work, we study how financial reports can be automatically enriched with word-level tags from the eXtensive Business Reporting Language ( xbrl ), a tedious and costly task not considered so far.",
"1 To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports.",
"These comprise multiple sections, including financial tables and text paragraphs, called text notes .",
"In addition, legislation in the us , the uk , the eu and elsewhere requires the reports to be annotated with tags of xbrl , an xml -based language, to facilitate the processing of financial information.",
"The annotation of tables can be easily achieved by using company-specific pre-tagged table templates, since the structure and contents of the tables in the reports of a particular company rarely change.",
"On the other hand, the unstructured and dynamic nature of text notes (Figure 1) makes adding xbrl tags to them much more di cult.",
"Hence, we focus on automatically tagging text notes.",
"Tackling this task could facilitate the annotation of new and old reports (which may not include xbrl tags), e.g., 1 See https://www.xbrl.org/the-standard/what/ an-introduction-to-xbrl/ for an introduction to xbrl .",
"Towards this direction, we release f i ner -139, a new dataset of 1.1M sentences with gold xbrl tags, from annual and quarterly reports of publicly traded companies obtained from the us Securities and Exchange Commission ( sec ).",
"Unlike other entity extraction tasks, like named entity recognition ( ner ) or contract element extraction (Table 1), which typically require identifying entities of a small set of common types (e.g., persons, organiza-tions), xbrl defines approx.",
"6k entity types.",
"As a first step, we consider the 139 most frequent xbrl entity types, still a much larger label set than usual.",
"Another important di erence from typical entity extraction is that most tagged tokens ( 91%) in the text notes we consider are numeric, with the correct tag per token depending mostly on context, not the token itself (Figure 1).",
"The abundance of numeric tokens also leads to a very high ratio of out-of-vocabulary ( oov ) tokens, approx.",
"10.4% when using a custom word 2 vec (Mikolov et al., 2013a) model trained on our corpus.",
"When using subwords, e.g., in models like bert (Devlin et al., 2019), there are no oov tokens, but numeric expressions get excessively fragmented, making it di cult for the model to gather information from the fragments and correctly tag them all.",
"In our experiments, this is evident by the slightly better performance of stacked bilstm s (Graves et al., 2013; Lample et al., 2016) operating on word embeddings compared to bert .",
"The latter improves when using a crf (La erty et al., 2001) layer, which helps avoid assigning nonsensical sequences of labels to the fragments (subwords) of numeric expressions.",
"To further improve bert 's performance, we propose two simple and e ective solutions that replace numeric expressions with pseudo-tokens reflecting the original token shapes and magnitudes.",
"We also experiment with fin bert (Yang et al., 2020), an existing bert model for the financial domain, and release our own family of bert models, pretrained on 200k financial filings, achieving the best overall performance.",
"Our key contributions are: 1. We introduce xbrl tagging, a new financial nlp task for a real-world need, and we release f i ner -139, the first xbrl tagging dataset.",
"2 2. We provide extensive experiment bilstm s and bert with generic or in-domain pretraining, which establish strong baseline results for future work on f i ner -139.",
"3. We show that replacing numeric tokens with pseudo-tokens reflecting token shapes and magnitudes significantly boosts the performance of bert -based models in this task.",
"4. We release a new family of bert models ( sec bert , sec bert num , sec bert shape ) pre-trained on 200k financial filings that obtains the best results on f i ner -139.",
"3,4,5 2 Related Work Entity extraction: xbrl tagging di ers from ner and other previous entity extraction tasks (Table 1), like contract element extraction (Chalkidis et al., 2019).",
"Crucially, in xbrl tagging there is a much larger set of entity types (6k in full xbrl , 139 in f i ner -139), most tagged tokens are numeric ( 91%), and the correct tag highly depends on context.",
"In most ner datasets, numeric expressions are classified in generic entity types like amount' or date' (Bikel et al., 1999); this can often be achieved with regular expressions that look for common formats of numeric expressions, and the latter are often among the easiest entity types in ner datasets.",
"By contrast, although it is easy to figure out that the first three highlighted expressions of Figure 1 are amounts, assigning them the correct xbrl tags requires carefully considering their context.",
"Contract element extraction (Chalkidis et al., 2019) also requires considering the context of dates, amounts etc. to distinguish, for example, start dates from end dates, total amounts from other mentioned amounts, but the number of entity types in f i ner -139 is an order of magnitude larger (Table 1) and the full tag set of xbrl is even larger (6k).",
"2 https: // huggingface.co / datasets / nlpaueb / finer-139 3 https: // huggingface.co / nlpaueb / sec-bert-base 4 https: // huggingface.co / nlpaueb / sec-bert-num 5 https: // huggingface.co / nlpaueb / sec-bert-shape 4420 Financial ner : Previous financial ner applications use at most 9 (generic) class labels.",
"Salinas Alvarado et al. (2015) investigated ner in finance to recognize organizations, persons, locations, and miscellaneous entities on 8 manually annotated sec financial agreements using crf s.",
"Francis et al. (2019) experimented with transfer learning by unfreezing di erent layers of a bilstm with a crf layer, pre-trained on invoices, to extract 9 entity types with distinct morphological patterns (e.g., iban , company name, date, total amount).",
"Also, Hampton et al. (2015, 2016) applied a Maximum Entropy classifier, crf s, and handcrafted rules to London Stock Exchange filings to detect 9 generic entity types (e.g., person, organization, location, money, date, percentages).",
"Finally, Kumar et al. (2016) extended the work of Finkel et al. (2005) and built a financial entity recognizer of dates, numeric values, economic terms in sec and nonsec documents, using numerous handcrafted text features.",
"By contrast, f i ner -139 uses a specialized set of 139 highly technical economic tags derived from the real-world need of xbrl tagging, and we employ no handcrafted features.",
"Numerical reasoning: Neural numerical reasoning studies how to represent numbers to solve numeracy tasks, e.g., compare numbers, understand mathematical operations mentioned in a text etc.",
"Zhang et al. (2020) released n um bert , a Transformer-based model that handles numerical reasoning tasks by representing numbers by their scientific notation and applying subword tokenization.",
"On the other hand, g en bert (Geva et al., 2020) uses the decimal notation and digit-by-digit tokenization of numbers.",
"Both models attempt to deal with the problem that word-level tokenization often turns numeric tokens to oov s (Thawani et al., 2021).",
"This is important, because numerical reasoning requires modeling the exact value of each numeric token.",
"In f i ner -139, the correct xbrl tags of numeric tokens depend much more on their contexts and token shapes than on their exact numeric values (Fig. 1).",
"Hence, these methods are not directly relevant.",
"g en bert 's digit-by-digit tokenization would also lead to excessive fragmentation, which we experimentally find to harm performance.",
"Traditionally, business filings were simply rendered in plain text.",
"Thus, analysts and researchers needed to manually identify, copy, and paste each Figure 2: Frequency distribution of the 139 xbrl tags used in this work over the entire f i ner -139 dataset.",
"amount of interest (e.g., from filings to spread-sheets).",
"With xbrl -tagged filings, identifying and extracting amounts of interest (e.g., to spreadsheets or databases) can be automated.",
"More generally, xbrl facilitates the machine processing of financial documents.",
"Hence, xbrl -tagged financial reports are required in several countries, as already noted (Section 1).",
"However, manually tagging reports with xbrl tags is tedious and resource-intensive.",
"Therefore, we release f i ner -139 to fos-ter research towards automating xbrl tagging.",
"f i ner -139 was compiled from approx.",
"10k annual and quarterly English reports (filings) of publicly traded companies downloaded from sec 's edgar system.",
"6 The downloaded reports span a 5-year period, from 2016 to 2020.",
"They are annotated with xbrl tags by professional auditors and describe the performance and projections of the companies.",
"We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of xbrl tags in annual and quarterly reports.",
"xbrl taxonomies have many di erent attributes, making xbrl tagging challenging even for humans (Baldwin et al., 2006; Hoitash and Hoitash, 2018).",
"Furthermore, each jurisdiction has its own xbrl taxonomy.",
"Since we work with us documents, our labels come from us gaap .",
"7 Since this is the first e ort towards automatic xbrl tagging, we chose to work with the most essential and informative attribute, the tag names , which populate our label set.",
"Also, since xbrl tags change periodically, we selected the 139 (out of 6,008) most frequent xbrl tags with at least 1,000 appearances in f i ner -139.",
"The distribution of these tags seems to follow a power law (Figure 2), hence most of the 6k xbrl tags that we did not consider are very rare.",
"We used the iob 2 annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.",
"We split the text notes into 1.8M sentences, the majority of which ( 90%) contained no tags.",
"8 The sentences are also html -stripped, normalized, and lower-cased.",
"To avoid conflating trivial and more di cult cases, we apply heuristic rules to discard sentences that can be easily flagged as almost certainly requiring no tagging; in a real-life setting, the heuristics, possibly further improved, would discard sentences that do not need to be processed by the tagger.",
"The heuristic rules were created by inspecting the training subset and include regular expressions that look for amounts and other expressions that are typically annotated.",
"Approx.",
"40% of the 1.8M sentences were removed, discarding only 1% of tagged ones.",
"We split chronologically the remaining sentences into training, development, and test sets with an 80 / 10 / 10 ratio (Table 2).",
"spa C y (Honnibal et al., 2020) is an open-source nlp library.",
"9 It includes an industrial ner that uses word-level Bloom embeddings (Serr and Karat-zoglou, 2017) and residual Convolutional Neural Networks ( cnn s) (He et al., 2016).",
"We trained spa C y 's ner from scratch on f i ner -139.",
"bilstm : This baseline uses a stacked bidirectional Long-Short Term Memory ( lstm ) network (Graves et al., 2013; Lample et al., 2016) with residual connections.",
"Each token t i of a sentence S is mapped to an embedding and passed through the bilstm stack to extract the corresponding contextualized embedding.",
"A shared multinomial logistic regression ( lr ) layer operates on top of each contextualized embedding to predict the correct label.",
"We use the word 2 vec embeddings (Mikolov et al., 2013a,b) of Loukas et al. (2021).",
"bert : This is similar to bilstm , but now we fine-tune bert base (Devlin et al., 2019) to extract contextualized embeddings of subwords.",
"Again, a multinomial lr layer operates on top of the contextualized embeddings to predict the correct label of the corresponding subword.",
"crf s: In this case, we replace the lr layer of the previous two models with a Conditional Random Field ( crf ) layer (La erty et al., 2001), which has been shown to be beneficial in several token labeling tasks (Huang et al., 2015; Lample et al., 2016; Chalkidis et al., 2020b).",
"10 5 Baseline Results We report micro-F 1 ( -F 1 ) and macro-F 1 (m-F 1 ) at the entity level, i.e., if a gold tag annotates a multi-word span, a model gets credit only if it tags the exact same span.",
"This allows comparing more easily methods that label words vs. subwords.",
"Table 3 shows that spa C y performs poorly, possibly due to the di erences from typical token labeling tasks, i.e., the large amount of entity types, the abundance of numeric tokens, and the fact that in f i ner -139 the tagging decisions depend mostly on context.",
"Interestingly enough, bilstm (with word embeddings) performs slightly better than bert .",
"However, when a crf layer is added, bert achieves the best results, while the performance of bilstm (with word embeddings) deteriorates significantly, contradicting previous studies.",
"We hypothesize that the inconsistent e ect of crf s is due to tokenization di erences.",
"When using bert 's subword tokenizer, there are more decisions that need to be all correct for a tagged span to be correct (one decision per subword) than when using word tokenization (one decision per word).",
"Thus, it becomes more di cult for subword models to avoid nonsensical sequences of token labels, 10 We use a linear-chain crf layer with log-likelihood optimization and Viterbi decoding.",
"e.g., labeling two consecutive subwords as beginning and inside of di erent entity types, especially given the large set of 279 labels (Table 1).",
"The crf layer on top of subword models helps reduce the nonsensical sequences of labels.",
"On the other side, when using words as tokens, there are fewer opportunities for nonsensical label sequences, because there are fewer tokens.",
"For instance, the average number of subwords and words per gold span is 2.53 and 1.04, respectively.",
"Hence, it is easier for the bilstm to avoid predicting nonsensical sequences of labels and the crf layer on top of the bilstm (with word embeddings) has less room to contribute and mainly introduces noise (e.g., it often assigns low probabilities to acceptable, but less frequent label sequences).",
"With the crf layer, the model tries to maximize both the confidence of the bilstm for the predicted label of each word and the probability that the predicted sequence of labels is frequent.",
"When the bilstm on its own rarely predicts nonsensical sequences of labels, adding the crf layer rewards commonly seen sequences of labels, even if they are not the correct labels, without reducing the already rare nonsensical sequences of labels.",
"To further support our hypothesis, we repeated the bilstm experiments, but with subword (instead of word) embeddings, trained on the same vocabulary with bert .",
"Without the crf , the subword bilstm performs much worse than the word bil stm (6 p.p drop in -F 1 ), because of the many more decisions and opportunities to predict nonsensical label sequences.",
"The crf layer substantially improves the performance of the subword bilstm (4.9 p.p. increase in -F 1 ), as expected, though the word bilstm (without crf ) is still better, because of the fewer opportunities for nonsensical predictions.",
"A drawback of crf s is that they significantly slow down the models both during training and inference, especially when using large label sets (Goldman and Goldberger, 2020), as in our case.",
"Hence, although bert with crf was the best model in Table 3, we wished to improve bert 's performance further without employing crf s.",
"In f i ner -139, the majority (91.2%) of the gold tagged spans are numeric expressions, which cannot all be included in bert 's finite vocabulary; e.g., the token 9,323.0 ' is split into five subword units, [ 9 ', ##, ', ##323 ', ##. ', ##0 '] , while the token",
"12.78 ' is split into [ 12 ', ##. ', ##78 '] .",
"The excessive fragmentation of numeric expressions, when using subword tokenization, harms the performance of the subword-based models (Table 3), because it increases the probability of producing nonsensical sequences of labels, as already discussed.",
"We, therefore, propose two simple and e ective solutions to avoid the over-fragmentation of numbers.",
"bert + [ num ] : We detect numbers using regular expressions and replace each one with a single [ num ] pseudo-token, which cannot be split.",
"The pseudo-token is added to the bert vocabulary, and its representation is learned during fine-tuning.",
"This allows handling all numeric expressions in a uniform manner, disallowing their fragmentation.",
"bert + [ shape ] : We replace numbers with pseudo-tokens that cannot be split and represent the number's shape.",
"For instance, 53.2 ' becomes [XX . X] ', and 40,200.5 ' becomes [XX,XXX . X] '.",
"We use 214 special tokens that cover all the number shapes of the training set.",
"Again, the representations of the pseudo-tokens are fine-tuned, and numeric expressions (of known shapes) are no longer fragmented.",
"The shape pseudo-tokens also capture information about each number's magnitude; the intuition is that numeric tokens of similar magnitudes may require similar xbrl tags.",
"Figure 3 illustrates the use of [ num ] and [ shape ] pseudo-tokens.",
"Driven by the recent findings that pre-training language models on specialized domains is beneficial for downstream tasks (Alsentzer et al., 2019; Belt-4423",
"agy et al., 2019; Yang et al., 2020; Chalkidis et al., 2020b), we explore this direction in our task which is derived from the financial domain.",
"fin bert : We fine-tune fin bert (Yang et al., 2020), which is pre-trained on a financial corpus from sec documents, earnings call transcripts, and analyst reports.",
"11 The 30k subwords vocabulary of fin bert is built from scratch from its pre-training corpus.",
"Again, we utilize fin bert with and without our numeric pseudo-tokens, whose representations are learned during fine-tuning.",
"sec bert : We also release our own family of bert models.",
"Following the original setup of Devlin et al. (2019), we pre-trained bert from scratch on edgar corpus , a collection of financial documents released by Loukas et al. (2021).",
"The resulting model, called sec bert , has a newly created vocabulary of 30k subwords.",
"To further examine the impact of the proposed [ num ] and [ shape ] special tokens, we also pre-trained two additional bert variants, sec bert num and sec bert shape , on the same corpus, having replaced all numbers by [ num ] or [ shape ] pseudo-tokens, respectively.",
"In this case, the representations of the pseudo-tokens are learned during pre-training and they are updated during fine-tuning.",
"Table 4 reports micro-averaged precision, recall, and F 1 on development and test data.",
"As with Table 3, a lr layer is used on top of each embedding to predict the correct label, unless specified otherwise.",
"11 We use the finbert finvocab uncased version from https: // github.com / yya518 / FinBERT.",
"Focusing on the second zone, we observe that the [ num ] pseudo-token improves bert 's results, as expected, since it does not allow numeric expressions to be fragmented.",
"The results of bert + [ num ] are now comparable to those of bert + crf .",
"Performance improves further when utilizing the shape pseudo-tokens ( bert + [ shape ] ), yielding 79.4 -F 1 and showing that information about each number's magnitude is valuable in xbrl tagging.",
"Interestingly, fin bert (3rd zone) performs worse than bert despite its pre-training on financial data.",
"Similarly to bert , this can be attributed to the fragmentation of numbers (2.5 subwords per gold tag span).",
"Again, the proposed pseudo-tokens ( [ num ] , [ shape ] ) alleviate this problem and allow fin bert to leverage its in-domain pre-training in order to finally surpass the corresponding bert variants, achieving an 80.1 -F 1 test score.",
"Our new model, sec bert (last zone), which is pre-trained on sec reports, performs better than the existing bert and fin bert models, when no numeric pseudo-tokens are used.",
"However, sec bert is still worse than bert with numeric pseudo-tokens (75.7 vs. 78.3 and 79.4 test -F 1 ), su ering from number fragmentation (2.4 subwords per gold tag span).",
"sec bert (without pseudo-tokens) also performs worse than the bilstm with word embeddings (75.7 vs. 77.3 -F 1 , cf. Table 3).",
"However, when the proposed pseudo-tokens are used, sec bert num and sec bert shape achieve the best overall performance, boosting the test -F 1 to 80.4 and 82.1, respectively.",
"This indicates that learning to handle numeric expressions during model pretraining is a better strategy than trying to acquire this knowledge only during fine-tuning.",
"An alternative way to bypass word fragmentation is to use subword pooling for each word.",
"cs et al. (2021) found that for ner tasks, it is better to use the first subword only, i.e., predict the label of an entire word from the contextualized embedding of its first subword only; they compared to several other methods, such as using only the last subword of each word, or combining the contextualized embeddings of all subwords with a self-attention mechanism.",
"Given this finding, we conducted an ablation study and compare",
"(i) our best model ( sec bert ) with first subword pooling (denoted sec bert first ) to",
"(ii) sec bert with our special tokens ( sec bert num , sec bert shape ), which avoid segmenting numeric tokens.",
"Table 5 shows that, in xbrl tagging, using the proposed special tokens is comparable ( sec bert num ) or better ( sec bert shape ) than performing first pooling ( sec bert first ).",
"It might be worth trying other pooling strategies as well, like last -pooling or subword self-attention pooling.",
"It's worth noting, however, that the latter will increase the training and inference times.",
"To further investigate the e ectiveness of our pseudo-tokens, we incorporated them in the bil stm operating on subword embeddings (3rd model of Table 3).",
"Again, we replace each number by a single [ num ] pseudo-token or one of 214 [ shape ] pseudo-tokens, for the two approaches, respectively.",
"These replacements also happen when pretraining word 2 vec subword embeddings; hence, an embedding is obtained for each pseudo-token.",
"Table 6 shows that bilstm num outperforms the bilstm subword model.",
"bilstm shape further improves performance and is the best bilstm subword model overall, surpassing the subword bilstm with crf , which was the best subword bilstm model in Table 3. These results further support our hypothesis that the [ num ] and [ shape ] pseudo-tokens help subword models successfully generalize over numeric expressions, with [ shape ] being the best of the two approaches, while also avoiding the over-fragmentation of numbers.",
"Since xbrl tagging is derived from a real-world need, it is crucial to analyze the model's performance in a business use case.",
"After consulting with experts of the financial domain, we concluded that one practical use case would be to use an xbrl tagger as a recommendation engine that would propose the k most probable xbrl tags for a specific token selected by the user.",
"The idea is that an expert (e.g., accountant, auditor) knows beforehand the token(s) that should be annotated and the tagger would assist by helping identify the appropriate tags more quickly.",
"Instead of having to select from several hundreds of xbrl tags, the expert would only have to inspect a short list of k proposed tags.",
"We evaluate our best model, sec bert shape , in this use case using Hits@ k .",
"We use the model to return the k most probable xbrl tags for each token that needs to be annotated, now assuming that 4425 the tokens to be annotated are known.",
"If the correct tag is among the top k , we increase the number of hits by one.",
"Finally, we divide by the number of tokens to be annotated.",
"Figure 4 shows the results for di erent values of k .",
"The curve is steep for k = 1 to 5 and saturates as k approaches 10, where Hits@ k is nearly perfect (99.4%).",
"In practice, this means that a user would have to inspect 10 recommended xbrl tags instead of hundreds for each token to be annotated; and in most cases, the correct tag would be among the top 5 recommended ones.",
"We also performed an exploratory data and error analysis to unveil the peculiarities of f i ner -139, extract new insights about it, and discover the limitations of our best model.",
"Specifically, we manually inspected the errors of sec bert shape in under-performing classes (where F 1 < 50%) and identified three main sources of errors.",
"Specialized terminology: In this type of errors, the model is able to understand the general financial semantics, but does not fully comprehend highly technical details.",
"For example, Operating Lease Expense amounts are sometimes missclassified as Lease And Rental Expense , i.e., the model manages to predict that these amounts are about expenses in general, but fails to identify the specific details that distinguish operating lease expenses from lease and rental expenses.",
"Similarly, Payments to Acquire Businesses (Net of Cash Acquired) amounts are mostly misclassified as Payments to Acquire Businesses (Gross) .",
"In this case, the model understands the notion of business acquisition, but fails to di erentiate between net and gross payments.",
"Financial dates: Another interesting error type is the misclassification of financial dates.",
"For example, tokens of the class Debt Instrument Maturity Date are mostly missclassified as not belonging to any entity at all ( O' tag).",
"Given the previous type of errors, one would expect the model to miss-classify these tokens as a di erent type of financial date, but this is not the case here.",
"We suspect that errors of this type may be due to annotation inconsistencies by the financial experts.",
"Annotation inconsistencies: Even though the gold xbrl tags of f i ner -139 come from professional auditors, as required by the Securities & Exchange Commission ( sec ) legislation, there are still some discrepancies.",
"We provide an illustrative Figure 5: A manually inspected sentence from f i ner 139 showing some inconsistencies in the gold xbrl tags of the auditors.",
"example in Figure 5.",
"We believe that such inconsistencies are inevitable to occur and they are a part of the real-world nature of the problem.",
"We hope that this analysis inspires future work on xbrl tagging.",
"For example, the specialized terminology and financial date errors may be alleviated by adopting hierarchical classifiers (Chalkidis et al., 2020a; Manginas et al., 2020), which would first detect entities in coarse classes (e.g., expenses, dates) and would then try to classify the identified entities into finer classes (e.g., lease vs. rent expenses, instrument maturity dates vs. other types of dates).",
"It would also be interesting to train classifiers towards detecting wrong (or missing) gold annotations, in order to help in quality assurance checks of xbrl -tagged documents.",
"We introduced a new real-word nlp task from the financial domain, xbrl tagging, required by regulatory commissions worldwide.",
"We released f i ner 139, a dataset of 1.1M sentences with xbrl tags.",
"Unlike typical entity extraction tasks, f i ner -139 uses a much larger label set (139 tags), most tokens to be tagged are numeric, and the correct tag depends mostly on context rather than the tagged token.",
"We experimented with several neural classifiers, showing that a bilstm outperforms bert due to the excessive numeric token fragmentation of the latter.",
"We proposed two simple and e ective solutions that use special tokens to generalize over the shapes and magnitudes of numeric expressions.",
"We also experimented with fin bert , an existing 4426 bert model for the financial domain, which also benefits from our special tokens.",
"Finally, we pretrained and released our own domain-specific bert model, sec bert , both with and without the special tokens, which achieves the best overall results with the special tokens, without costly crf layers.",
"In future work, one could hire experts to reannotate a subset of the dataset to measure human performance against the gold tags.",
"Future work could also consider less frequent xbrl tags (few-and zero-shot learning) and exploit the hierarchical dependencies of xbrl tags, possibly with hierarchical classifiers, building upon our error analysis."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"abstain",
"method"
] |
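The finer-139 row above replaces each number with a [ num ] pseudo-token or with a shape pseudo-token such as '[XX,XXX.X]' before subword tokenization. A minimal sketch of that preprocessing follows, over whitespace-tokenized text; the regular expression is an illustrative stand-in, since the paper's exact patterns are not reproduced here.

```python
import re

# Illustrative pattern for tokens like "9,323.0" or "12.78" (an assumption,
# not the authors' exact regex).
NUM_RE = re.compile(r"^\d{1,3}(?:,\d{3})*(?:\.\d+)?$")

def to_shape(token: str) -> str:
    """Map a numeric token to its shape pseudo-token, e.g. '53.2' -> '[XX.X]'."""
    return "[" + re.sub(r"\d", "X", token) + "]"

def replace_numbers(tokens, mode="shape"):
    """Swap numeric tokens for [NUM] or shape pseudo-tokens so that the
    subword tokenizer can no longer fragment them."""
    return [("[NUM]" if mode == "num" else to_shape(t)) if NUM_RE.match(t) else t
            for t in tokens]

# replace_numbers(["a", "total", "of", "40,200.5", "was", "paid"])
# -> ['a', 'total', 'of', '[XX,XXX.X]', 'was', 'paid']
```

For the bert variants, the pseudo-tokens would then be added to the vocabulary (in the Hugging Face API, roughly `tokenizer.add_tokens(...)` followed by `model.resize_token_embeddings(len(tokenizer))`), so their embeddings are learned during fine-tuning, or during pre-training for the sec bert num / sec bert shape models.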
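The same row evaluates the tagger as a recommendation engine with Hits@k, assuming the tokens to be annotated are known in advance. A small sketch of that metric; the names are illustrative.

```python
def hits_at_k(gold_tags, ranked_predictions, k=5):
    """Fraction of to-be-annotated tokens whose gold xbrl tag appears among
    the model's k most probable tags (ranked_predictions[i] is the tag list
    for token i, sorted by decreasing probability)."""
    hits = sum(gold in preds[:k]
               for gold, preds in zip(gold_tags, ranked_predictions))
    return hits / len(gold_tags)
```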
[
"Claims are the central component of an argument.",
"Detecting claims across different domains or data sets can often be challenging due to their varying conceptualization.",
"We propose to alleviate this problem by fine tuning a language model using a Reddit corpus of 5.5 million opinionated claims.",
"These claims are self-labeled by their authors using the internet acronyms IMO/IMHO (in my (humble) opinion).",
"Empirical results show that using this approach improves the state of art performance across four benchmark argumentation data sets by an average of 4 absolute F1 points in claim detection.",
"As these data sets include diverse domains such as social media and student essays this improvement demonstrates the robustness of fine-tuning on this novel corpus.",
"Toulmin's influential work on argumentation (2003) introduced a claim as an assertion that deserves our attention .",
"More recent work describes a claim as a statement that is in dispute and that we are trying to support with reasons (Govier, 2010).",
"While some traits of claims are defined by their context, such as that claims usually need some support to make up a 'complete' argument (e.g., premises, evidence, or justifications), the exact definition of a claim may vary depending on the domain, register, or task.",
"Daxenberger et al. (2017) try to solve the problem of claim conceptualization by training models across one data set and testing on others, but their cross-domain claim detection experiments mostly led to decreased results over in-domain experiments.",
"To demonstrate that some properties of claims are shared across domains, we create a diverse and rich corpus mined from Reddit and evaluate on held out datasets from different sources.",
"We use Universal Language Model Fine-Tuning (ULM-FiT) (Howard and Ruder, 2018), which pre-trains a language model (LM) on a large general-domain corpus and fine-tunes it on our Reddit corpus before training a final classifier to identify claims on various data sets.",
"We make the following contributions: We release a dataset of 5.5 million opinionated claims from Reddit, 1 which we hope will be useful for computational argumentation.",
"We show transfer learning helps in the detection of claims with varying definitions and conceptualizations across data sets from diverse domains such as social media and student essays.",
"Empirical results show that using the Reddit corpus for language model fine-tuning improves the state-of-the-art performance across four benchmark argumentation data sets by an average of 4 absolute F1 points in claim detection.",
"Argumentation mining (AM) is a research field within the growing area of computational argumentation.",
"The tasks pursued within this field are highly challenging and include segmenting argumentative and non-argumentative text units, parsing argument structures, and recognizing argumentative components such as claimsthe main focus of this work.",
"On the modeling side, Stab and Gurevych (2017) and Persing and Ng (2016) used pipeline approaches for AM, combining parts of the pipeline using integer linear programming (ILP).",
"Eger et al. (2017) proposed state-of-art sequence tagging neural end-to-end models for AM.",
"Schulz et al. (2018) used multi-task learning (MTL) to identify argumentative components, 1 https://bitbucket.org/tuhinch/imho-naacl2019 challenging assumptions that conceptualizations across AM data sets are divergent and that MTL is difficult for semantic or higher-level tasks.",
"Rosenthal and McKeown (2012) were among the first to conduct cross-domain experiments for claim detection.",
"However they focused on relatively similar data sets like blog articles from Live-Journal and Wikipedia discussions.",
"Al-Khatib et al. (2016), on the other hand, wanted to identify argumentative sentences through cross-domain experiments.",
"Their goal was, however, to improve argumentation mining via distant supervision rather than detecting differences in the notions of a claim.",
"Daxenberger et al. (2017) showed that while the divergent conceptualization of claims in different data sets is indeed harmful to cross-domain classification, there are shared properties on the lexical level as well as system config-urations that can help to overcome these gaps.",
"To this end they carried out experiments using models with engineered features and deep learning to identify claims in a cross-domain fashion.",
"Pre-trained language models have been recently used to achieve state-of-the-art results on a wide range of NLP tasks (e.g., sequence labeling and sentence classification).",
"Some of the recent works that have employed pre-trained language models include ULMFiT (Howard and Ruder, 2018), ELMo (Peters et al., 2018), GLoMo (Yang et al., 2018), BERT (Devlin et al., 2019) and OpenAI transformer (Radford et al., 2018).",
"While these models have demonstrated success on a variety of tasks, they have yet to be widely used in argumentation mining.",
"As the goal of our experiments is to develop models that generalize across domains, we collect a large, diverse dataset from social media and fine-tune and evaluate on held out data sets.",
"In order to obtain a data set representative of claims, we need a method of automatic data collection that introduces minimal linguistic bias.",
"We thus mine comments containing the acronyms IMO (in my opinion) or IMHO (in my hum-ble opinion) from the social media site Reddit.",
"IM(H)O is a commonly used acronym 2 with the 2 https://reddit.zendesk.com/hc/en-us/articles/205173295-What-do-all-these-acronyms-meanonly purpose of identifying one's own comment as a personal opinion.",
"We provide some examples 3 below: That's virtually the same as neglect right there IMHO .",
"IMO , Lakers are in big trouble next couple years To use these examples for pre-training, we need only to remove the acronym (and any resulting unnecessary punctuation).",
"We collect Reddit comments from December 2008 to August 2017 through the pushshift.io API, resulting in 5,569,962 comments.",
"We perform sentence and word tokenization using Spacy.",
"We then extract only the sentence containing IMO or IMHO and discarded the surrounding text.",
"We refer to the resulting collection of comments as the IMHO dataset.",
"The IMHO dataset contains no negative examples, only labeled opinions.",
"Furthermore, opinions in this dataset may be only a claim or both a claim and a premise.",
"As our goal is to identify claims, we thus consider four data sets from argumentation mining.",
"As argumentation appears in both monologue and dialogue data, we choose two datasets created from student essays and two from social media.",
"Peldszus and Stede (2016) created a corpus of German microtexts ( MT ) of controlled linguistic and rhetorical complexity.",
"Each document includes a single argument and does not exceed five argumentative components.",
"This corpus was translated to English, which we use for our experiments.",
"The persuasive essay ( PE ) corpus (Stab and Gurevych, 2017) includes 402 student essays.",
"The scheme comprises major claims, claims, and premises at the clause level.",
"This corpus has been used extensively in the argumentation mining community.",
"The corpus from Habernal and Gurevych (2017) includes user-generated web discourse ( WD ) such as blog posts, or user comments annotated with claims and premises as well as backings, rebuttals and refutations.",
"Finally, Hidey et al. (2017) propose a two-tiered annotation scheme to label claims and premises and their semantic types in an online persuasive forum ( CMV ) using a sample of 78 threads from the sub-reddit Change My View, with the long-term goal 3 Examples have been modified to protect user privacy Figure 1: Schematic of ULMFiT, showing three stages.",
"of understanding what makes a message persuasive.",
"As with Daxenberger et al. (2017), we model claim detection at the sentence level, as this is the only way to make all data sets compatible to each other.",
"Table 1 gives an overview of the data.",
"As the IMHO dataset is only self-labeled with claim data but does not contain non-claims, we need a method of incorporating this dataset into a claim detection model.",
"We thus use a language model fine-tuning approach, which requires only data similar to the task of interest.",
"The Universal Language Model Fine-Tuning method (ULMFiT) (Howard and Ruder, 2018) consists of the following steps:",
"a) General-domain LM pre-training",
"b) Task-specific LM fine-tuning and",
"c) Task-specific classifier fine-tuning.",
"In step",
"(a), the language model is trained on Wikitext-103 (Merity et al., 2017) consisting of 28,595 preprocessed Wikipedia articles and 103 million words capturing general properties of language.",
"Step",
"(b) fine-tunes the LM on task-specific data, as no matter how diverse the general-domain data used for pre-training is, the data of the target task will likely come from a different distribution.",
"In step",
"(c), a classifier is then trained on the target task, fine-tuning the pre-trained LM but with an additional layer for class prediction.",
"The models all use a stacked LSTM to represent each sentence.",
"For stages",
"(a) and",
"(b), the output of the LSTM is used to make a prediction of the next token and the parameters from stage",
"(a) are used to initialize stage",
"(b).",
"For stage",
"(c), the model is initialized with the same LSTM but with a new classifier layer given the output of the LSTM.",
"This process is illustrated in Figure 1.",
"We refer the reader to Howard and Ruder (2018) for further details.",
"In our work, we maintain steps",
"(a) and",
"(c) but modify step",
"(b) so that we fine-tune the language model on our IMHO dataset rather than the task-specific data.",
"The goal of ULMFiT is to allow training on small datasets of only a few hundred examples, but our experiments will show that fine-tuning the language model on opinionated claims improves over only task-specific LM fine-tuning.",
"Table 2 show the results on the four data sets.",
"We compare to two baselines.",
"The numbers in the CNN column are taken directly from the results of the deep learning experiments mentioned in the work of Daxenberger et al. (2017).",
"Their deep learning experiments consisted of 4 different models:",
"a) bidirectional LSTM",
"b) LSTM",
"c) CNN initialized with random word embeddings and",
"d) CNN initialized with word2vec.",
"In their experiments for MT and PE, a CNN initialized with random word embeddings gave the best results and for WD a CNN with word2vec gave the best results.",
"As CMV is a new data set we experimented with all four models and obtained the best result using a CNN with random initialization.",
"The Task-Specific LM Fine-Tuning column Metric CNN Task-Specific LM Fine-Tuning IMHO LM Fine-Tuning Claim Macro Claim Macro Claim Macro WD P 50.0 72.5 50.0 72.5 54.0 75.9 R 20.4 59.2 20.0 59.8 24.0 61.7 F 28.9 62.6 28.5 62.7 33.3 65.2 MT P 66.5 79.0 66.2 78.5 71.0 80.9 R 68.2 78.5 68.0 77.8 71.8 81.4 F 67.3 78.6 67.0 78.1 71.2 81.1 PE P 60.9 73.2 62.3 73.2 62.6 74.4 R 61.2 74.0 65.8 75.1 66.0 75.0 F 61.1 73.6 64.0 74.1 64.3 74.8 CMV P 54.0 65.1 55.0 68.0 55.7 69.5 R 53.0 62.5 59.0 65.0 60.0 65.3 F 53.5 63.8 57.0 66.4 57.8 67.3 Table 2: Table showing the results on four data sets.",
"contains the results obtained by fine-tuning the language model on each respective dataset while the IMHO LM Fine-Tuning column contains the results from fine-tuning the language model on IMHO.",
"As in previous work, we report both Claim F1 and Macro F1.",
"The experiments were carried out in a 10-fold cross-validation setup with fixed splits into training and test data and the F1 scores are averaged over each of the folds.",
"Each model was run 10 times to account for variance and the results reported in the table are an average of 10 runs.",
"We use the same hyper-parameters as Howard and Ruder (2018) except for a batch size of 32 for MT and 64 for the remaining data sets.",
"The learning rate for classifier fine-tuning is set to 0.0001.",
"We train our classifier for 5 epochs on each data set.",
"We obtain statistically significant results ( p < 0 . 05 using Chi Squared Test) over all CNN models trained only on the task-specific datasets.",
"We also find that for all models, IMHO LM Fine-Tuning even performs better than Task-Specific LM Fine-Tuning, and is significantly better for the MT and WD datasets (which both contain very few claims).",
"For the MT and WD datasets, Task-Specific LM Fine-Tuning actually performs worse than the CNN models.",
"To understand how using the IMHO dataset improved over the CNN and Task-Specific Fine-Tuning settings, we show examples that were incorrectly",
"incorrectly classified by the two baseline models but correctly classified by the IMHO Fine-Tuning.",
"We retrieve the most similar example in the IMHO dataset to these misclassified samples according to TF-IDF over unigrams and bigrams.",
"Table 3 presents the examples labeled by their dataset and the corresponding IMHO example.",
"We find that the IMHO dataset contains n-grams indicative of claims, e.g. can be very rewarding , should be taken off the market , and should intervene , demonstrating that the IMHO LM Fine-Tuning learns representations of claims based on discriminatory phrases.",
"In fact, the CMV example is almost an exact paraphrase of the IMHO example, differing only in the phrase anecdotal evidence compared to my anecdotal experience .",
"At the same time, we find that many of the topics in these datasets occur in the IMHO dataset as well, such as public schooling and licence fees , suggesting that the language model learns a bias towards topics as well.",
"While empirical results indicate that IMHO Fine-Tuning helps in claim detection, we also investigated whether the language model introduces any bias towards types of claims.",
"To this end, we also evaluated examples classified incorrectly by the model.",
"Table 4 shows sentences that are predicted to be opinionated claims by our model but are actually non-claims.",
"We note that a portion of these misclassified examples were premises used to back a claim which could be classified correctly given additional context.",
"For instance, the second example from the MT data set in the table backs Dataset Sentence MT If there must be rent increases , there should also be a cap to avoid nasty surprises MT Video games namely FIFA in my case , can fascinate young people for hours more intensively and emotionally than any sport in the world !"
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain"
] |
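The row above mines the IMHO corpus by keeping only the sentence of each Reddit comment that contains IMO/IMHO and stripping the acronym. A rough sketch of that extraction, assuming spaCy with the `en_core_web_sm` model installed; the authors' exact regular expression and punctuation cleanup are not given, so these are illustrative.

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")            # assumes the model is downloaded
IMHO_RE = re.compile(r"\bIM(?:H)?O\b[ ,:]*")  # the acronym plus trailing punctuation

def extract_imho_claims(comment: str):
    """Return the IMO/IMHO-bearing sentences of a comment with the acronym
    removed (a simplified version of the cleanup described above)."""
    return [IMHO_RE.sub("", sent.text).strip(" ,")
            for sent in nlp(comment).sents
            if IMHO_RE.search(sent.text)]

# extract_imho_claims("IMO, Lakers are in big trouble next couple years")
# -> ['Lakers are in big trouble next couple years']
```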
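The same row's ULMFiT recipe ends with stage (c): the stacked LSTM from the fine-tuned language model is kept and the next-token head is swapped for a class-prediction layer. A schematic PyTorch version of that final stage, assuming a batch-first `nn.LSTM` encoder; this sketches the idea rather than the fastai implementation used by Howard and Ruder (2018).

```python
import torch.nn as nn

class ClaimClassifier(nn.Module):
    """Stage (c): initialise the encoder from stage (b)'s language model and
    add a new layer that predicts claim vs. non-claim."""
    def __init__(self, lm_encoder: nn.LSTM, hidden_size: int, n_classes: int = 2):
        super().__init__()
        self.encoder = lm_encoder                      # shared with the LM
        self.head = nn.Linear(hidden_size, n_classes)  # replaces the LM head

    def forward(self, embedded):             # (batch, seq_len, emb_dim), batch_first
        outputs, _ = self.encoder(embedded)  # (batch, seq_len, hidden_size)
        return self.head(outputs[:, -1, :])  # classify from the final hidden state
```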
[
"We introduce ParaBLEU, a paraphrase representation learning model and evaluation metric for text generation.",
"Unlike previous approaches, ParaBLEU learns to understand paraphrasis using generative conditioning as a pretraining objective.",
"ParaBLEU correlates more strongly with human judgements than existing metrics, obtaining new state-of-the-art results on the 2017 WMT Metrics Shared Task.",
"We show that our model is robust to data scarcity, exceeding previous state-of-the-art performance using only 50% of the available training data and surpassing BLEU, ROUGE and METEOR with only 40 labelled examples.",
"Finally, we demonstrate that ParaBLEU can be used to conditionally generate novel paraphrases from a single demonstration, which we use to confirm our hypothesis that it learns abstract, generalized paraphrase representations.",
"Representing the relationship between two pieces of text, be it through a simple algorithm or a deep neural network, has a long history and diverse use-cases that include the evaluation of text generation models (Wiseman et al., 2017; Van Der Lee et al., 2019) and the clinical evaluation of human speech (Johnson et al., 2003; Weintraub et al., 2018).",
"One of the earliest examples of such a representation is the Levenshtein distance (Levenshtein, 1966), which describes the number of character-level edits required to transform one piece of text into another.",
"This metric now forms part of a wider fam-ily of edit-distance-based metrics that includes the word error rate (WER) and the translation error rate (TER) (Och, 2003).",
"Other algorithms, such as ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005) and the widely used BLEU metric (Pa-pineni et al., 2002), perform exact or approximate n -gram matching between the two texts.",
"texts, which benefits from a deep prior understanding of the semantic and syntactic symmetries of language (Novikova et al., 2017).",
"For example, pairs like she was no ordinary burglar and she was an ordinary burglar are close in edit-distance-space but semantically disparate.",
"The goal of an automatic text evaluation metric is typically to be a good proxy for human judgements, which is clearly task-dependent.",
"More recently, neural approaches have begun to close the gap between automatic and human judgements of semantic text similarity using Transformer-based language models such as BERT (Zhang et al., 2019a; Sellam et al., 2020).",
"They aim to leverage the transferable knowledge gained by the model during pretraining on large text corpora.",
"The relationship between two texts is similarly modelled, albeit implicitly, by sequence-to-sequence models such as BART (Lewis et al., 2019) and T5 (Raffel et al., 2019).",
"We consider paraphrase evaluation and paraphrase generation to be two instances of paraphrase representation learning .",
"Linguistically, a paraphrase is a restatement that preserves essential meaning, with arbitrary levels of literality, fidelity and completeness.",
"In practice, what qualifies as a good paraphrase is context-specific.",
"One motivation for considering paraphrase evaluation as a representation learning problem is the varied nature of paraphrase evaluation tasks, which may have an emphasis on semantic equivalence (e.g. PAWS (Zhang et al., 2019b) and MRPC (Dolan and Brockett, 2005)), logical entailment versus contradiction (e.g. MultiNLI (Williams et al., 2017) and SNLI (Bowman et al., 2015)), and the acceptability of the generated text (e.g. the WMT Metrics Shared Task (Bojar et al., 2017)).",
"Considering even broader applications such as clinical speech analysis further motivates learning generalized paraphrase representations.",
"In this paper, we introduce ParaBLEU, a paraphrase representation learning model that predicts a 4052 conditioning factor for sequence-to-sequence paraphrase generation as one of its pretraining objectives, inspired by style transfer in text-to-speech (Skerry-Ryan et al., 2018) and text generation systems (Yang et al., 2018; Lample et al., 2018).",
"ParaBLEU addresses the primary issue with neural paraphrase evaluation models to date: the selection of a sufficiently generalized pretraining objective that primes the model for strong performance on downstream paraphrase evaluation tasks when data is scarce.",
"Previous state-of-the-art neural models have either used a broad multi-task learning approach or eschewed additional pretraining altogether.",
"The former case may encourage the model to learn the biases of inferior or inappropriate metrics, while the latter leaves room for optimization.",
"Non-neural models, such as BLEU, TER, ROUGE and BERTScore (Zhang et al., 2019a), benefit from requiring no training data and thereby avoid do-main shift issues.",
"They cannot, however, learn to exploit task-specific nuances of what defines good' paraphrasing.",
"We evaluate ParaBLEU's ability to predict human judgements of paraphrases using the English subset of the 2017 WMT Metrics Shared Task.",
"A useful neural text similarity metric should be robust to data scarcity, so we assess performance as a function of the fine-tuning dataset size.",
"Finally, using the ParaBLEU pretraining model as a paraphrase generation system, we explore our hypothesis that the model reasons in high-level paraphrastic concepts rather than low-level edits through an explainability study, and demonstrate that ParaBLEU can operate as a conditional paraphrase generation model.",
"In this section, we describe and justify the set of inductive biases we build into ParaBLEU, along with a description of the model architecture and pretraining/fine-tuning strategy.",
"We consider a reference text x and a candidate text x .",
"We wish to learn a function f : f ( x, x ) y , where y RN is a singleor multi-dimensional paraphrase representation, which could be a scalar score.",
"Our approach begins by decomposing paraphrase representation learning into three overlapping factors:",
"Building a representation of high-level syntactic and semantic differences between x and x , contrasted with the low-level pseudo-syntactic/-semantic operations considered by edit-distance-based and n -gram based metrics.",
"2. Candidate acceptability judgement: Evaluating the grammaticality, coherence and naturalness of x in isolation.",
"Perplexity (Jelinek et al., 1977) with respect to a given language model is one proxy for this.",
"3. Semantic equivalence: Assessing whether x and x convey the same essential meaning precisely, as opposed to merely being semantically similar.",
"This is related to entailment classification tasks and, more broadly, the interaction between language and formal logic.",
"Using pretrained language models: All three factors require a general understanding of the semantic and syntactic structures of language, making transfer learning from powerful pretrained language models such as BERT (Devlin et al., 2018) appealing.",
"Non-local attention as bitext alignment: Factors (1) and (3) require performing context-aware matching' between x and x .",
"This is similar to the statistical method of bitext alignment (Tiedemann, 2011).",
"Attention mechanisms within a Transformer (Vaswani et al., 2017) are an obvious candidate for learnable context-aware matching, which has precedent in paraphrasing tasks and the next-sentence-prediction objective of the original BERT pretraining.",
"If the tokens of x and x are concatenated into one long input sequence, local attention mechanisms, such as those used in T5, may be suboptimal for longer text-pairs.",
"Bottlenecked conditional generation objective: A key insight is that a strong factor (1) representation z RM where h : h ( x, x ) z is one that can condition the sampling of x from x through some generative model g : g ( x | z ) x .",
"One trivial solution to this is h ( x, x ) = x .",
"To avoid this case, we introduce a bottleneck on z such that 4053 it is advantageous for the model to learn to represent high-level abstractions, which are cheaper than copying x through the bottleneck.",
"It is likely advantageous to use a pretrained sequence-to-sequence language model, which can already reason in linguistic concepts.",
"Entailment classification objective: Factor (3) is similar to the classification of whether x logically entails x .",
"There are a number of sentence-pair datasets with entailment labels that could be used to construct this loss; see Table 4. 2.2 ParaBLEU Inspired by style transfer in text-to-speech (Skerry-Ryan et al., 2018) and text generation systems (Yang et al., 2018; Lample et al., 2018), we propose the architecture shown in Figure 1. The grey box indicates the Transformer encoder we wish to pretrain, which we refer to as the edit encoder'.",
"Masked language modelling objective: Factor (2) can be addressed by an MLM objective, which alone is sufficient for a neural network to learn a language model (Devlin et al., 2018).",
"Performing masked language modelling on a reference-candidate pair also encourages the network to use x to help unmask x and vice versa, strengthening the alignment bias useful for factors (1) and (2).",
"Factorization of the task leads to three complementary objectives: a cross-entropy masked language modelling loss LMLM (Devlin et al., 2018), a cross-entropy autoregressive causal language modelling loss LAR (Radford et al., 2018) and a binary cross-entropy entailment classification loss LCLS .",
"An additional sequence-to-sequence Transformer model is used during pretraining to provide a learning signal.",
"The proposed bottleneck lies within the feedforward network module (see Figure 1), implemented by restricting the hidden dimension to 64 (down from 768 or 1 , 024 in the cases of ParaBLEU base and ParaBLEU large respectively) before projecting back up to the dimension of the BART decoder.",
"The full pretraining loss is given by: L pre := LAR + LMLM + LCLS , (1) where and are tunable hyperparameters.",
"the sequence-to-sequence model is discarded and the edit encoder is fine-tuned using a linear projection on top of the pooled output, projecting the pooled output down to a single dimension that constitutes the predicted score.",
"Throughout this work, our pooling layers simply take the beginning-of-sequence token.",
"An MSE loss LMSE is used during fine-tuning.",
"Our architecture places restrictions on valid combinations of pretrained models.",
"We found in practice that using an encoder-only pretrained language model to initialize the edit encoder, and a sequence-to-sequence pretrained language model to initialize the sequence-to-sequence model, works best.",
"This is likely because encoder-only models are encouraged to encode strong representations at the final layer, and these representations have already been directly pretrained with an MLM objective.",
"For technical ease we require that the models use the same tokenizer, and that the pretrained checkpoints are available through the HuggingFace transformers library (Wolf et al., 2019).",
"In this paper, we consider the combination RoBERTa (Liu et al., 2019) + BART, but we note that both multilingual (XLM-R (Conneau et al., 2019) + mBART (Liu et al., 2020)) and long (Longformer + Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020)) combinations exist.",
"We consider both base and large variants, which correspond to RoBERTa base and RoBERTa large .",
"In both cases, we use a BART base checkpoint.",
"Evaluation metrics BERTScore (Zhang et al., 2019a), a non-learned neural metric, uses a matching algorithm on top of contextualized neural word embeddings, similar to n -gram matching approaches.",
"MoverScore (Zhao et al., 2019) is similar to BERTScore but uses an optimal transport algorithm.",
"BLEU, ROUGE, METEOR and chrF++ (Popovic, 2017) are widely used n -gram-based methods, working at the word, subword or character level.",
"TER is an edit-distance-based metric, similar to WER.",
"BLEURT (Sellam et al., 2020) is a neural automatic evaluation metric for text generation.",
"Starting from a pretrained BERT model, it is further pretrained to predict a number of pre-existing metrics, such as BLEU, ROUGE and BERTScore.",
"ParaBLEU, by contrast, does not use pre-existing metrics as training objectives, instead using generative conditioning as a more 4054 Figure 1:",
"general signal for paraphrase representation learning.",
"COMET (Rei et al., 2020) is a framework for training multilingual machine translation (MT) evaluation models where parameters in the regression or ranking layers are optimized using human judgements scores with either an MSE objective or triplet objective respectively.",
"PRISM (Thomp-son and Post, 2020) similar to ParaBLEU formulates evaluation as a paraphrasing task.",
"However it treats paraphrasing as zero-shot translation using a multilingual neural MT model as a paraphraser.",
"BARTScore (Yuan et al., 2021) calculates the log-likelihood of the candidate text conditioned upon the reference text from BART (Lewis et al., 2019), a pretrained sequence-to-sequence model.",
"Paraphrase generation There is a wealth of recent literature on controllable paraphrase generation and linguistic style transfer (Yang et al., 2018; Zhao et al., 2018; Jin et al., 2020), which aims to extract the style of a piece of text and map it onto another piece of text without changing its semantic meaning.",
"T5 leverages a huge text corpus as pretraining for conditional generation using com-mands' encoded as text, which includes paraphrastic tasks such as summarization.",
"FSET (Kazemne-jad et al., 2020) is a retrieval-based paraphrase generation system in which a sentence z is paraphrased by first locating a similar reference sentence from a large bank of reference/candidate pairs, then extracting and replaying similar low-level edits on z .",
"Common to ParaBLEU and FSET is the use of a Transformer for paraphrase style transfer, with differing architectural details.",
"However, FSET is designed to transpose low-level edits and so requires lexically similar examples; whereas ParaBLEU is explicitly designed to learn high-level, reference-invariant paraphrase representations using a factorized objective.",
"The musical style Transformer autoencoder (Choi et al., 2020) uses a similar Transformer-based style transfer architecture to conditionally generate new music in controllable styles.",
"Other examples in text-to-speech systems perform style transfer by encoding the prosody of a source sentence into a bottlenecked reference embedding (Skerry-Ryan et al., 2018) or disentangled style tokens (Wang et al., 2018b).",
"STRAP (Kr-ishna et al., 2020) generates paraphrases in controllable styles by mixing and matching multiple style-specific fine-tuned GPT-2 models.",
"REAP (Goyal and Durrett, 2020) uses a Transformer to generate syntactically diverse paraphrases by including an additional position embedding representing the syntactic tree.",
"DNPG (Li et al., 2019) is a paraphrase generation system that uses a cascade of Transformer encoders/decoders to control whether paraphrasing is sentential/phrasal.",
"In this section, we describe the pretraining and fine-tuning datasets we use in our studies.",
"The WMT Metrics Shared Task is an annual benchmark for automated evaluation metrics for translation systems, where the goal is to predict average human ratings comparing the machine-translated candidate x with human-translated reference x , both of which have been translated from the same source sentence.",
"We use an identical setup to (Sellam et al., 2020) and (Zhang et al., 2019a), where we use the subset of data for which the candidate and reference are in English, which we will refer to as the to-English subset.",
"The source, which is unused, can be in any non-English language, the set of which varies from year-to-year.",
"We produce results for the WMT Metrics Shared Task 2017 (WMT17), training on the to-English subsets of WMT15 and WMT16.",
"The test set contains 4 , 132 examples and the training set 5 , 360 examples.",
"The distributions of example length in tokens is shown in Figure 2. The WMT data is prepared using the WMT preparation code in the BLEURT repository 1 .",
"The decision to test only on the WMT17 dataset is deliberate.",
"Results from previous state-of-the-art papers (Sellam et al., 2020; Zhang et al., 2019a) demonstrate issues with WMT18 and later datasets: the noise in the test set is high and differentiation between different methods becomes so suppressed for later years that the benchmark becomes uninteresting.",
"This issue is noted in both the BLEURT paper and by the organizers of the 2018 WMT Metrics Shared Task 2 .",
"We report the agreement between the metric and the human scores using two related correlation coefficients: absolute Kendall | | and absolute Pearson | r | , the latter of which was the official metric of the 2017 task.",
"In our summary results in the main paper, we average these metrics across all source languages but not over reference/candidate language.",
"Full results are provided in Appendix E. 3.2 ParaCorpus In addition to our design choices, we also encourage a robust and generalizable pretraining by using a dataset that covers a variety of styles and lengths.",
"We collate a number of paraphrase datasets to create a single pretraining dataset we call ParaCorpus.",
"The composition of the dataset is shown in Table 4, with a total of 5 .",
"1 m examples.",
"All examples have reference and candidate texts and around one third additionally have binary entailment labels.",
"Where the source dataset included three-way labels entailment'/contradiction'/neutral', entailment' was mapped to 1 and the others to 0 .",
"A subset of ParaNMT-50M (Wieting and Gimpel, 2017), which includes noisier, speech-like examples, was included to add additional stylistic diversity to the dataset, and to increase the population of the dataset with combined token lengths above 128 , which we hypothesize will make the model more robust to the longer examples seen in the WMT datasets.",
"Token lengths are shown in Figure 2. 4 Experiments In this section, we present results on WMT17, benchmarked against the current state-of-the-art approach, along with widely used neural, n -gram and edit-distance-based metrics.",
"We study ParaBLEU performance as a function of number of pretraining 1 https://github.com/google-research/ bleurt 2 https://www.statmt.org/wmt18/ metrics-task.html 4056 Table 1: Summary results for WMT17.",
"steps and the size of the fine-tuning dataset.",
"Finally, we perform ablations to test the impact of the inductive biases and resultant architectural decisions described in Section 2. We report results for both ParaBLEU base , based on RoBERTa base ( 12 layers, 768 hidden units, 12 heads), and our default model ParaBLEU large , based on RoBERTa large ( 24 layers, 1 , 024 hidden units, 16 heads).",
"Both models are trained near-identically for 4 epochs on ParaCorpus.",
"Further pretraining details can be found in Appendix A. For fine-tuning, we use a batch size of 32 , a learning rate of 1 e5 and train for 40 k steps, with a validation set size of 10% (unless otherwise stated).",
"No reference texts are shared between the train and validation sets, following (Sellam et al., 2020).",
"Pretraining ParaBLEU large takes 10 h on a 16 A100 GPU machine.",
"Fine-tuning takes 8 h on a single A100 GPU machine.",
"ParaBLEU results on WMT17 are given in Table 1, along with a number of baselines described in Section 2.3).",
"ParaBLEU large achieves new state-of-the-art results on WMT17, exceeding the previous state-of-the-art approach, BLEURT, on both correlation metrics.",
"We note that non-neural metrics perform the worst, of which the character-level n -gram-matching algorithm chrF++ performs the best.",
"Non-learned neural metrics (BERTScore and Mover-Figure 3: Performance of ParaBLEU large on WMT17 as a function of number of pretraining steps (top) and the fine-tuning dataset size (bottom).",
"Note that the Pearson r results (blue) use the left y -axis, whereas Kendall (orange) uses the right y -axis.",
"Score) tend to perform better, and learned neural metrics (BLEURT and ParaBLEU) perform the best.",
"BLEU, the most widely used metric, has the poorest correlation with human judgements.",
"This is consistent with results seen previously in the literature (Zhang et al., 2019a; Sellam et al., 2020).",
"The significant drop in performance from ParaBLEU large to ParaBLEU base highlights the benefit of larger, more expressive pretrained language models.",
"Figure 3 probes performance as a function of number of pretraining steps and the size of the fine-tuning dataset for ParaBLEU large .",
"As expected, pretraining for longer increases downstream task performance.",
"However, we note that 40 k steps, approximately 4 epochs of ParaCorpus, does not yet reach diminishing returns on WMT17 performance.",
"We therefore recommend pretraining for significantly longer.",
"Both BERT and RoBERTa are pretrained for 40 epochs (Liu et al., 2019; Lan et al., 2019); the T5 authors ablate their dataset size at a fixed number of steps and conclude that performance does not significantly degrade up to and including 64 epochs (Raffel et al., 2019); conversely, 4057 Table 2: Ablation results on WMT17.",
"the BLEURT authors see diminishing returns on downstream task performance after 2 pretraining",
"epochs (Sellam et al., 2020).",
"For the fine-tuning dataset size study, we consistently use a validation set size of 25% to facilitate the small-data results.",
"Despite the training set (the to-English subsets of WMT15 and WMT16) forming a relatively small dataset, ParaBLEU large trained on 50% of the available data ( 2 , 010 training examples, 670 validation examples) still beats the previous state-of-the-art, BLEURT, yielding a Pearson correlation of 0 .",
"823 .",
"The impact of reducing the train size from 100% ( 4 , 020 training examples, 1 , 340 validation examples) to 25% ( 1 , 005 training examples, 335 validation examples) has a relatively small effect on performance, reducing Pearson r from 0 .",
"832 to 0 .",
"795 .",
"With a dataset size of only 1% ( 40 training examples, 14 validation examples), ParaBLEU large achieves a Pearson r of 0 .",
"571 , still correlating significantly more strongly with human judgements than BLEU, TER, ROUGE, METEOR and MoverScore.",
"We attribute this to the suitability of the generalized pretraining objective for priming the model for paraphrase evaluation tasks.",
"To more directly test the hypotheses in Section 2.1, we perform ablations in which we remove each component of the factorized objective in turn.",
"The results of this are shown in Table 2. Each part of the objective is associated with an increase in downstream task performance.",
"The most significant degradation comes from removing the MLM loss.",
"Possible reasons for this include: the MLM loss' contribution to candidate acceptability judgement are crucial; the MLM loss acts as a regularizer, encouraging the edit encoder to represent paraphrases in linguistic concepts rather than low-level edits; and the MLM loss further encourages bitext alignment behaviour, as described in Section 2.1.",
"As our final study, we exploit the generative nature of the pretraining architecture to test our claim that the edit encoder reasons in high-level paraphrastic concepts rather than low-level edits.",
"To do this, we diverge from the pretraining setup, in which the same reference text is passed to both the edit encoder and the sequence-to-sequence model, by passing a different, unseen reference to the sequence-to-sequence model.",
"Akin to (Brown et al., 2020; Gao et al., 2020), the hope is that the demon-stration paraphrase' acts as a conditioning factor for paraphrasing the unseen sentence in a similar way.",
"If the model is reasoning in low-level edits or otherwise cheating', we expect to see: Thematic/word leakage from the encoder candidate to the generated candidate, caused by the candidate being autoencoded.",
"This is the undesirable behaviour we sought to address using a bottleneck.",
"Ungrammatical or otherwise unacceptable output with made-up words and/or bad word order, caused by the encoding of low-level edits scrambling the generator reference tokens.",
"If the model is reasoning in high-level paraphrastic concepts, we expect to see: Consistently grammatical, acceptable output.",
"The flavour of the paraphrase mirroring the conditioning, e.g. the altering of a linguistic style, mood or tense.",
"We generate text using beam-search (Medress et al., 1977).",
"We sample references at random from the MRPC dataset.",
"The demonstration candidate is a hand-crafted paraphrase of the demonstration reference that embodies a pre-specified paraphrase type.",
"We report the predicted entailment score of the demonstration reference and candidate, along with the candidate generated by the model.",
"A summarized, random subset of generation results is shown in Appendix C. We include two sets of results for each paraphrastic type (e.g. 4058 negative'): one where the demonstration refer-ence/candidate differ in this concept, and one where both embody the concept.",
"Since we wish to encode the difference between the demonstration reference/candidate texts, the desired behaviour when the demonstration pair is identical is no change.",
"If this is not the case, it is likely that the edit encoder is just autoencoding the candidate using high-level linguistic concepts, similar to linguistic style transfer.",
"Further randomly chosen examples are given in Appendix F. The results present a strong case that the encoder is representing high-level paraphrastic concepts.",
"It is able to successfully identify changes in mood, style and tense between the demonstration reference and candidate, and transpose them onto the unseen reference to make a largely grammatical and appropriately paraphrased sentence.",
"We do not see significant leakage of concepts, words or styles between the demonstration candidate and the generated candidate, instead the expected transfer of paraphrase style.",
"Limitations of this work include the relatively small set of baselines used; there is an ever-increasing number of text similarity metrics and so only a subset is presented here.",
"As demonstrated in Section 4.1, it seems likely that the performance is currently limited by pretraining time and so we have not yet probed the ceiling performance of this method.",
"Swapping out the edit encoder and encoder-decoder for current state-of-the-art models like DeBERTa (He et al., 2020) may offer further performance boosts.",
"Expanding this work to predicting on datasets beyond the 2017 WMT Metrics Shared Task will probe the generalizability of the techniques in this paper.",
"The application of ParaBLEU for paraphrase generation has not been quantitatively explored.",
"Although the results presented in Section 5 are chosen at random, the analysis is qualitative.",
"More rigorous methods for evaluating the quality of the paraphrase generation are left for future work.",
"In this paper, we introduced ParaBLEU, a paraphrase representation learning model and associated paraphrase evaluation metric.",
"We demonstrated that the metric yields state-of-the-art correlation with human paraphrase judgements and is robust to data scarcity.",
"We motivated its pretraining strategy through a set of inductive biases, which we tested through ablation studies.",
"Finally, we reframed the pretraining as a one-shot paraphrase generation model and gathered evidence that ParaBLEU represents meaningful paraphrastic information."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain"
] |
[
"The use of crowdworkers in NLP research is growing rapidly, in tandem with the exponential increase in research production in machine learning and AI.",
"Ethical discussion regarding the use of crowdworkers within the NLP research community is typically confined in scope to issues related to labor conditions such as fair pay.",
"We draw attention to the lack of ethical considerations related to the various tasks performed by workers, including labeling, evaluation, and production.",
"We find that the Final Rule , the common ethical framework used by researchers, did not anticipate the use of online crowdsourcing platforms for data collection, resulting in gaps between the spirit and practice of human-subjects ethics in NLP research.",
"We enumerate common scenarios where crowdworkers performing NLP tasks are at risk of harm.",
"We thus recommend that researchers evaluate these risks by considering the three ethical principles set up by the Belmont Report.",
"We also clarify some common misconceptions regarding the Institutional Review Board (IRB) application.",
"We hope this paper will serve to reopen the discussion within our community regarding the ethical use of crowdworkers.",
"The information age brought with it the internet, big data, smartphones, AI, and along with these, a plethora of complex ethical challenges.",
"As a result, there is growing concern and discussion on ethics within the research community at large, including the NLP community.",
"This is manifested in new ethics-focused workshops, ethics conference panels and relevant updates to peer review forms.",
"While ethics in NLP has multiple aspects, most recent attention focuses on pressing issues related to the societal impact of NLP.",
"These include discrimination, exclusion, over-generalization, bias, Corresponding author: [email protected] and fairness (Hovy and Spruit, 2016; Leidner and Plachouras, 2017).",
"Other works are concerned with the ethical implications of NLP shared tasks (Parra Escartn et al., 2017), and introducing ethics into the NLP curriculum (Bender et al., 2020).",
"A substantial amount of NLP research now takes advantage of crowdworkers workers on crowdsourcing platforms such as Amazon Mechanical Turk (known also as AMT or MTurk), Figure Eight 1 , Appen, Upwork, Prolific, Hybrid, Tencent Questionnaire, and Baidu Zhongbao, as well as internal crowdsourcing platforms in companies such as Microsoft and Apple.",
"Workers are recruited to label, evaluate, and produce data.",
"In the pre-internet era, such tasks (e.g. part-of-speech (POS) tagging) were done by hiring expert annotators or linguistics students.",
"However, these are now mostly replaced by crowdworkers due to lower costs, convenience, speed, and scalability.",
"Overall, the general consensus in the literature is that as long as the pay to the crowdworkers is fair (minimum hourly wage or above), there are no further ethical concerns, and there is no need for approval by an Institutional Review Board 2 (with some exceptions).",
"For example, Hovy and Spruit (2016) mention that [w]ork on existing corpora is unlikely to raise any flags that would require an IRB approval, with a footnote that there are a few exceptions.",
"Fort et al. (2011) mention that only [a] small number of universities have insisted on institutional review board approval for MTurk experiments.",
"As another example, NLP students are being taught that paid labeling does not require IRB approval since [i]t's not an experiment 1 Previously CrowdFlower; acquired by Appen in 2019. 2 Institutional Review Boards (IRBs) are university-level, multi-stakeholder committees that review the methods proposed for research involving human subjects to ensure that they conform to ethical principles. IRBs are also known by various other names, such as Research Ethics Boards (REBs) and Research Ethics Committees (RECs). Non-academic organizations may employ similar committees. Accepted Papers Papers Using Payment IRB Year ACL EMNLP NAACL All Crowdsourcing Mentioned Mentioned 2015 318 312 186 816 59 (7%) 4 (7%) 0 2016 328 264 182 774 82 (11%) 15 (18%) 0 2017 302 323 625 57 (9%) 12 (21%) 3 2018 381 549 332 1262 136 (11%) 17 (13%) 1 2019 660 683 423 1766 189 (11%) 32 (17%) 5 2020 779 754 1533 180 (12%) 42 (23%) 5 Total 2768 2885 1123 6776 703 (10%) 122 (17%) 14 Table 1: Papers using crowdsourced tasks in top NLP conferences, 20152020. The columns show, from left to right: conference year; number of accepted papers at ACL, EMNLP, and NAACL; total number of accepted papers; number of accepted papers using crowdsourced tasks (percentage of papers using crowdsourced tasks); number of papers using crowdsourced tasks that mention payment (percentage of papers using crowdsourced tasks that mention payment); number of papers using crowdsourced tasks that mention IRB review or exemption. with human subjects (Carnegie-Mellon University, Language Technologies Institute, 2020).",
"Indeed, NLP papers that involve crowdsourced work rarely mention a review by an ethics board.",
"In this work, we wish to revisit the ethical issues of crowdsourcing in the NLP context, highlight several issues of concern, and suggest ways forward.",
"From our survey of top NLP conferences, we find that crowdsourcing tasks are growing rapidly within the community.",
"We therefore establish a common understanding of research ethics and how it relates to crowdsourcing.",
"We demonstrate that the existing ethical framework is often inadequate and does not seek to protect crowdsourced workers.",
"We then dispel common misunderstandings regarding the IRB process that NLP researchers might harbor.",
"And finally, we outline how to apply ethical values as guidelines to minimize potential harms, and conclude with recommendations.",
"To get a sense of the extent and growth of crowdsourced tasks within the NLP research community, we analyzed the proceedings of three top NLP conferences: ACL, EMNLP, and NAACL (also known as NAACL-HLT) 3 .",
"We scanned the annual proceedings of these conferences in the six years from 2015 to 2020, looking for papers that mention direct employment of crowdsourced workers.",
"All together, 3 ACL is the Annual Meeting of the Association for Computational Linguistics, EMNLP is the Conference on Empirical Methods in Natural Language Processing, and NAACL is the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"6776 papers were accepted for publication in these 16 conferences 4 .",
"In total, we identified 703 papers that use crowdworkers as part of their research.",
"5 The results are summarized in Table 1. Renumeration For each paper that uses crowdsourced labor, we checked whether the authors discuss payment or labor-related issues.",
"Out of the 703 papers, 122 (17%) discuss payment, either by detailing the amount paid per task, the worker's hourly wages, or declaring that wages were paid ethically.",
"While in some cases researchers emphasize fair payment (e.g., Nangia et al. (2020) ensured a pay rate of at least $15/hour), many other papers are more concerned about the cost of dataset acquisition, and thus mention the cost per task or the total dataset cost, but not the hourly compensation.",
"IRB Review Finally, we also checked whether authors mention a review by an IRB (or equivalent body) for their research.",
"We found very few papers that mention an IRB approval or exemption a total of 14 papers which make up only 2% of the works that use crowdsourcing.",
"Growth We see that research papers using crowdsourced tasks have made up a relatively constant 11-12% of all research papers in the last three years.",
"As research production grows exponentially, we expect a corresponding increase in the number of crowdsourced tasks.",
"6 4 NAACL was not held in 2017 and 2020; it is skipped every three years.",
"To understand the many nuanced ways in which researchers presently use crowdworkers in NLP tasks, we examined the tasks performed in each of the papers that use crowdsourcing.",
"We found that NLP crowdsourced tasks generally fall into one of three categories, which we designate as labeling , evaluation , and production .",
"We found that the categories account for 34%, 43%, and 23% of crowdsourcing tasks respectively.",
"The results are summarized in Table 2, which also lists common action verbs that researchers use to describe the work performed by crowdworkers.",
"Labeling entails the processing of existing data by the crowdworker and then the selection or composition of a label or labels for that data 7 .",
"Labeling tasks augment the data with human-supplied labels.",
"The augmented data are often used for training machine learning models.",
"We further divide labeling tasks into two: objective and subjective labeling.",
"In objective labeling, the desired label is factual, and does not depend on the worker.",
"Classical examples are the tagging of sentences with named entities and part-of-speech (POS) labeling, and text transcription.",
"In contrast, subjective labeling comprises of tasks where labels may depend on the worker's personality, cultural background, opinion, or affective state.",
"Examples include emotion labeling, detecting sarcasm in a tweet, or deciding whether a text constitutes hate speech or not.",
"In evaluation tasks, the worker is presented with data, for example a sentence, tweet, paragraph, or dialogue, and then requested to evaluate and score the data mostly text according to pre-defined criteria, such as fluency, coherence, originality, or structure.",
"These tasks are often used by researchers to evaluate natural language generation (NLG) models.",
"Similar to subjective labeling, 7 We refrain from using the terms annotation and an-notators as these terms are overloaded and often used for non-annotation work.",
"interpretation, values, or beliefs of the worker.",
"Finally, in production tasks, workers are asked to produce their own data, rather than label or evaluate existing data.",
"In the NLP context, this often amounts to text elicitation or text generation.",
"Examples include captioning a photo or video clip, writing a story given a sequence of images, or composing questions and answers.",
"In this category we also include text translation.",
"The produced data is often used for model training or evaluation.",
"While the majority of the studies use only one type of task labeling, evaluation, or production we found that in 10% of the papers that use crowdsourcing, researchers used two or more types of tasks in the same study.",
"The combination of production and evaluation is particularly common; researchers often ask workers to generate data, which in turn is used to train a model; they then use workers to evaluate the model's performance.",
"Although not common, some papers also collect personal information from workers.",
"For example, Yang et al. (2015) and Ding and Pan (2016) conduct personality surveys among its crowdworkers.",
"Prez-Rosas and Mihalcea (2015) collect demographic data from the workers which included their gender, age, country of origin, and education level.",
"Finally, we also found a few papers that add elements of gaming to their crowdsourced tasks, e.g. Niculae and Danescu-Niculescu-Mizil (2016) and Urbanek et al. (2019).",
"Given the increasing use of crowdsourced NLP tasks, how can researchers ensure ethical concerns are reasonably addressed?",
"Should a researcher make a judgement call and decide which tasks pose a risk of harm to the worker, and which are benign?",
"To answer such questions, we will first explore the existing ethical framework used by researchers in the biomedical, social, and behavioral sciences.",
"The roots of contemporary research ethics originate in the 19th century, when researchers made unparalleled discoveries, but also engaged in hazardous, and frequently deadly, experimentation without great concern for the human subjects involved as",
"long as these trials advanced the boundaries of scientific knowledge (Ivy, 1948).",
"The dominant ethics paradigm at the time was largely devoid of now-common principles surrounding therapeutic benefits, scientific validity, full knowledge, or subject consent (Lederer, 1995).",
"Examples include researchers infecting intellectually disabled orphans with gonorrhea, or puncturing a healthy and unaware woman with the nodules of a leper patient to observe the clinical course of these diseases (Shamoo and Resnik, 2009).",
"Such incidents were common before the 1940s, and academia and public discourse were generally ignorant of them and of research ethics in general (Rothman, 1987).",
"The revelation of the Nazi concentration camp experiments at the end of World War II was a watershed moment (Gille-spie, 1989) and led to an early precursor of contemporary research ethics, namely the Nuremberg Code of 1947 (Jonsen, 1998).",
"Not long after, the fallout of the Tuskegee Syphilis Study in the US prompted the formalization of research ethics at American universities (Caplan, 1992).",
"In the study, which took place between 1932 and 1972, a total of 399 socio-economically disadvantaged African-American males with latent syphilis infections were recruited through the false promise of free health-care.",
"Yet, these subjects were actually left without therapy even as effective treatment became available, with the objective to observe the clinical course of the disease (Brandt, 1978).",
"From thereon, beginning with biomedical research, the notion of research ethics has been institutionalized in the US at the university level through institutional review boards (IRBs), as well as national legislation (Israel, 2015).",
"Gradually, research ethics have become a concern also in the social and behavioral sciences, as demonstrated by the 1978 Belmont Report created by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (Jonsen, 2005).",
"This report became the basis of the 1991 Federal Policy for the Protection of Human Subjects in the United States, more commonly known as the Common Rule (Owen, 2006) and superseded by the Final Rule (2018).",
"Chiefly, the Final Rule aims to ensure that the following three basic principles listed in the Belmont Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1978) are met: 1. Respect for persons , which includes the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy; 2. Beneficence , which mandates considering whether the benefits resulting from the research can outweigh the risks; and 3. Justice , requiring that the burden and benefits of the research are equally shared among potential subjects.",
"The Final Rule is codified in Title 45, Code of Federal Regulations , Part 46 and applies to all government-funded research in the United States (Israel, 2015).",
"Virtually all universities in the United States apply this regulation to human subjects research projects irrespective of funding source (Klitzman, 2015).",
"Specifically, the Final Rule requires that most research involving human subjects receives approval from an IRB.",
"The IRB is a special university-level committee that reviews research proposals to verify that they comply with ethical standards.",
"While it is difficult to assess the effectiveness of IRBs, and the process of ethical review is sometimes criticized as overly bureaucratic which may hamper low-risk social science (Resnik, 2018; Schrag, 2010), glaring ethical lapses have been rare after 1974 (Klitzman, 2015).",
"The three ethical principles outlined by the Belmont Report Respect for persons, Beneficence, and Justice are also the stated principles guiding the actions of more recently formed ethics boards around the world, as well as the underlying principles of relevant policies of intergovernmental organizations, including the Universal Declaration on Bioethics and Human Rights (UNESCO, 2006) and the International Ethical Guidelines for Biomedical Research Involving Human Subjects (Council for International Organizations of Medical Sciences, 2002).",
"Consequently, many countries worldwide have modeled national regulations after the Final Rule or its predecessor, the Common Rule (Capron, 2008).",
"Both regulations have also influenced editorial policies of academic journals (e.g., the Committee on Publication Ethics, 2020).",
"The Final Rule can thus be considered a cross-disciplinary defacto standard for research ethics in Western/US-influenced academic settings (Gontcharov, 2018), including India, Japan, Korea, and Taiwan.",
"Though we find that a global agreement on human-subjects ethics is emerging, countries still vary in the extent to which relevant policies are accepted, framed, implemented, or enforced.",
"These formal rules and institutions have been established to protect the rights and interests of humans subjects involved as participants in scientific research.",
"Thus, we must detemine whether the rules and institutions of research ethics are even applicable to crowdsourced studies.",
"At the core of this determination are two fundamental questions: 1. Are crowdsourcing tasks research ?",
"In the following, we address these two questions.",
"4.1 Are Crowdsourcing Tasks Research ?",
"The Final Rule defines research as follows: Research means a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.",
"Activities that meet this definition constitute research for purposes of this policy, whether or not they are conducted or supported under a program that is considered research for other purposes.",
"(45 CFR 46.102(l), Final Rule, 2018) From this definition it is evident that rather than a concrete research behavior on part of the NLP researcher, it is the purpose of the research behavior that classifies said behavior as research under the Final Rule .",
"In other words, all categories of crowdsourced tasks summarized in Section 2, i.e., labeling, evaluation, and production, may be considered part of research so long as the intended outcome is to create generalizable knowledge.",
"Typically, this encompasses academic settings where research behavior takes place (course assignments by students being a prominent exception), but does not include research conducted in industry settings (Meyer, 2020; Jackman and Kanerva, 2015).",
"Human subject means a living individual about whom an investigator (whether professional or student) conducting research:",
"(i) Obtains information or biospeci-mens through intervention or interaction with the individual, and, uses, studies, or analyzes the information or biospecimens; or (ii) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospeci-mens.",
"(45 CFR 46.102(e)(1), Final Rule, 2018) Clearly, if the researcher obtains identifiable private information (IPI) as part of the crowdsourced task e.g. name, date of birth, email address, national identity number, or any other information that identifies the worker then (ii) holds and the worker is considered a human subject.",
"Even if the researcher does not obtain any IPI, the worker may still be considered a human subject under",
"(i) in certain cases.",
"This is because NLP researchers interact with crowdworkers through MTurk or analogous platforms when they publish the task, and obtain information through this interaction.",
"It is also evident that academic NLP researchers make use of this information as they conduct a given study, and hence it is used, studied, or analyzed.",
"If the information is about the crowdworker then they are considered human subjects and",
"(i) is met.",
"However, Final Rule does not expand on what constitutes information about the individual .",
"According to University of Washington, Office of Research (2020), for example, about whom means that the data or information relates to the person. Asking what [crowdworkers] think about something, how they do something, or similar questions usually pertain to the individuals. This is in contrast to questions about factual information not related to the person.",
"Whether the information obtained in an NLP task is about the worker can initially seem like an easy-to-answer question.",
"For example, Benton et al. (2017) write: [R]esearch that requires the annotation of corpora for training models involves human annotators.",
"But since the research does not study the actions of those annotators, the research does not involve human subjects.",
"By contrast, if the goal of the research was to study how humans annotate data, such as to learn about how humans interpret language, then the research may constitute human subjects research.",
"However, we believe that this is not so clear-cut.",
"First, one might argue that although in a POS labeling task we do not obtain information about the worker, other labeling tasks might be harder to classify.",
"For example, when researchers ask a worker to compose a story given a sequence of photos, do they obtain information about the worker?",
"And if so, what kind of information?",
"Similar questions can be asked about tasks related to emotion classification (which might reveal a worker's personality or mood), composing questions and answers (which point to areas of interest and cultural background), or identifying hate speech (which can indicate political orientation).",
"Second, platforms might automatically provide information that can be considered by some to be about the individual.",
"Even in the most benign and objective tasks such as POS tagging, MTurk supplies researchers with information on the amount of time taken to complete each task.",
"This information is sometimes collected and used by NLP researchers (e.g., Sen et al., 2020).",
"In summary, we have shown that in an academic context, NLP crowdsourcing tasks are research , but that the categorization of crowdworkers as human subjects can, in some cases, be a gray area that is open to interpretation.",
"The Final Rule was designed to address ethical issues in medical research, and later in behavioral sciences; lawmakers and experts involved did not anticipate its use in new domains such as crowdsourcing.",
"Therefore, its application to online data collection, and crowdsourcing in particular, can be ambiguous and unsatisfactory.",
"8 Thus, while in some cases the protections and procedures mandated under the Final Rule apply, in others they might not.",
"As a consequence, some NLP crowdsourcing tasks may not require an IRB application, and this may happen even if crowdworkers are at risk.",
"8 Some universities employ the Precautionary Principle and require all crowdsourced-enabled research to go through an IRB application.",
"As we now see, if workers constitute human subjects, an IRB application is required.",
"To clarify any other misconceptions that might be present within the community, we list key points related to the IRB process and dispel misconceptions around them.",
"While not exhaustive, the list can serve both researchers and reviewers.",
"The Final Rule includes provisions for IRB exemptions, and we expect the vast majority of crowdsourced NLP tasks to fall into that category.",
"However, it is crucial to understand that granting a research project an IRB exemption is not the prerogative of the researcher; it is only IRB that hands out exemptions following an initial review: [T]he determination of exempt status (and the type of review that applies) rests with the IRB or with an administration official named by your institution.",
"The determination does not rest with the investigator.",
"Therefore, all projects must be submitted to the IRB for initial review.",
"(American Association for Public Opinion Research, 2020) 5.2 Worker IDs Constitute IPI Researchers often obtain and store worker IDs unique and persistent identifiers assigned to each worker by the crowdsourcing platform even when workers are anonymous.",
"MTurk, for example, assigns each worker a fixed 14-digit string, which is provided to the researcher with completed tasks.",
"A worker ID is part of the worker's account, and is therefore linked to their personal details, including full name, email address, and bank account number.",
"As a consequence, the worker ID constitutes IPI (identifiable private information).",
"If the worker ID is obtained by the researcher, the research mandates an initial IRB review.",
"To avoid obtaining this IPI, researchers can create and store pseudonymized worker IDs, provided that these IDs cannot be mapped back to the original worker IDs.",
"shown in Section 4.2 that crowdsourced NLP tasks involve interaction between researcher and participant through which the data about the worker may be collected, and thus often require an initial review by an IRB.",
"Remuneration of human subjects does not change their status to independent contractors beyond the scope of research ethics.",
"In fact, compensation of human subjects for the time and inconvenience involved in participating is a standard practice espe-cially for research that poses little or no direct bene-fit for the subject and at the same time should not constitute undue inducement to participate, as the University of Virginia, Human Research Protection Program (2020) points out.",
"Some researchers believe that research that will not be published is not subject to an IRB review.",
"For example, Carnegie-Mellon University, Language Technologies Institute (2020) teaches students that Paid labeling does not require IRB approval... [b]ut sometimes you want to discuss results in papers, so consider IRB approval.",
"9 The definition of research in the Final Rule is not contingent on subsequent publication of the results.",
"Given the uncertainties of the peer-review process, whether research eventually finds its way into a publication can often be ascertained only ex post .",
"An important exception not requiring IRB approval is student work as part of course assignments.",
"However, subsequent use of research data collected originally as part of a student assignment is not mentioned in the Final Rule .",
"Consequently, universities handle this situation differently.",
"For example, University of Michigan allows retroactive IRB approval: Class assignments may become subject to this policy... if the faculty member or the students change their plans... application to the IRB for permission to use the data is required.",
"(University of Michigan, Research Ethics and Compliance, 2021) while Winthrop University does not: 9 Moreover, whether the labeling is paid or unpaid is irrelevant; see Section 5.4.",
"IRB approval cannot be granted retroactively, and data may need to be recollected for the project.",
"(University of Winthrop, The Office of Grants and Sponsored Research Development, 2021) 6 Risks and Harms for Crowdworkers Previous work on the ethics of crowdsourcing focused on labor conditions, such as fair pay, and on privacy issues (Fort et al., 2011; Gray and Suri, 2019).",
"However, even when payment is adequate and the privacy of the workers is preserved, there are additional ethical considerations that are often not taken into account, and might put the worker at risk.",
"We propose using the three ethical principles outlined by the Belmont Report Respect for Persons, Beneficence, and Justice to guide the action of researchers.",
"We outline some of the specific risks and harms that might befall NLP task crowdworkers in light of these principles.",
"While the list is not comprehensive, it can serve as a starting point to be used by researchers when planning their crowdsourced task, as well as by reviewers examining the ethical implications of a manuscript or research proposal.",
"NLP researchers are increasingly cognizant that texts can potentially harm readers, as evident by trigger warnings they add to their own papers (e.g., Sap et al., 2020; Nangia et al., 2020; Sharma et al., 2020; Han and Tsvetkov, 2020).",
"Moreover, researchers have long known that annotation work may be psychologically harmful.",
"The Linguistic Data Consortium (LDC), for example, arranged stress-relieving activities for annotators of broadcast news data, following reports of negative psychological impact such as intense irritation, overwhelmed feelings, and task-related nightmares (Strassel et al., 2000).",
"Although literature on the emotional toll on crowdworkers is still scant (Huws, 2015), there is growing literature on the psychological cost of work done by commercial content moderators (Steiger et al., 2021).",
"Crowdworkers deserve similar consideration: while NLP tasks can be as benign as the POS tagging of a children's poem, they may also involve exposure to disturbing textual or visual content.",
"minimal risk to the human subjects involved.",
"According to 45 CFR",
"46.102(j) minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine [...] psychological examinations or tests.",
"Exposing crowdworkers to sensitive content may exceed the threshold of minimal risk if the data that requires labeling or evaluation might be psychologically harmful.",
"The amount of harm (if any) depends on the sensitivity of the worker to the specific type of content.",
"The risk is generally higher in data labeling or text evaluation tasks, where workers might be repeatedly asked to categorize offensive tweets, transcribe violent texts, evaluate hateful text that may expose them to emotional stimuli, or, depending on the content and worker, traumatize them.",
"In some cases, sexual material can be highly offending or shocking and cause an emotional disturbance.",
"Although less likely, hazards can also occur when workers are asked to produce text, since workers are elicited to produce texts based on given input.",
"For example, when users are asked to compose a story based on images, certain images might trigger a harmful response in the worker.",
"A crowdworker might inadvertently or subconsciously expose sensitive information about themselves to the researcher.",
"This is more pronounced in text production, where the responses produced by such tasks reveal as much about the individual workers as they do about the produced text.",
"However, workers also reveal information about themselves when evaluating or labeling text, especially when subjective labeling is in place.",
"Moreover, even seemingly trivial data for example, the elapsed time taken by the worker to label, evaluate, or produce text may contain valuable information about the worker (and this information is automatically captured by MTurk).",
"Table 3 shows the risk level for the different task categories.",
"Moreover, researchers can obtain sensitive information about workers because the crowdsourcing platforms allow screening of workers using built-in qualification attributes including age, financial situation, physical fitness, gender, employment status, purchasing habits, political affiliation, handedness, Task Risk of Exposure of Workers' Category Sensitive Information Objective Labeling Low Subjective Labeling Medium Evaluation Medium Production High Table 3: Potential risk level by task category.",
"marital status, and education level (Amazon Mechanical Turk, 2020).",
"Researchers may also obtain other types of sensitive information by creating their own, arbitrary qualification tests as a quality control measure (Daniel et al., 2018).",
"In summary, the information obtained by researchers may reveal as much about the individual crowdworkers as they do about the data being labeled, evaluated, or produced.",
"Research on crowdsourcing platforms such as MTurk is inherently prone to the inclusion of vulnerable populations.",
"In 45 CFR 46.111(b) the Final Rule (2018) non-exhaustively lists children, prisoners, individuals with impaired decision-making capacity, or economically or educationally disadvantaged persons as vulnerable groups.",
"The Final Rule requires that additional safeguards are implemented to protect these vulnerable populations and makes IRB approval contingent on these safeguards.",
"A great proportion of MTurk crowdworkers are located in developing countries, such as India or Bangladesh, which makes MTurk an attractive proposition to those offering a task (Gray and Suri, 2019), but increases the risk of including economically-disadvantaged persons.",
"Furthermore, it is difficult to ensure that MTurk crowdworkers are above the age of majority, or fall into any other of the defined vulnerable populations (Mason and Suri, 2011).",
"Moreover, given the power imbalances between researchers in industrialized countries and crowdworkers in developing countries, ethical consideration should occur regardless of whether the jurisdiction in which the crowdworkers are located even has a legal framework of research ethics in place, or whether such a local framework meets the standard of the Belmont Report or Final Rule .",
"There is a perception among researchers that crowdworkers are anonymous and thus the issue of privacy is not a concern.",
"This is not the case.",
"Lease et al. (2013), for example, discuss a vulnerability that can expose the identity of an Amazon Mechanical Turk worker using their worker ID a string of 14 letters and digits because the same worker ID is used also as the identifier of the crowdworker's account on other Amazon assets and properties.",
"As a result, a Google search for the worker ID can lead to personal information such as product reviews written by the crowdworker on Amazon.com, which in turn can disclose the worker's identity.",
"Researchers might be unaware of these issues when they make worker IDs publicly available in papers or in datasets.",
"For example, Gao et al. (2015) rank their crowdworkers using MTurk worker IDs in one of the figures.",
"Moreover, breaches of privacy can also occur unintentionally.",
"For example, workers on MTurk are provided with an option to contact the researcher.",
"In this case, their email address will be sent to the researcher, who is inadvertently exposed to further identifiable private information (IPI).",
"We maintain that the anonymity of crowdworkers cannot be automatically assumed or guaranteed, as this is not a premise of the crowdsourcing platform.",
"Graber and Graber (2013) identify another source for harmful effects, which ties in with the risk of psychological harm, and is specific to gamified crowdsourced tasks: a possibility of addiction caused by dopamine release following a reward given during the gamified task.",
"Gamification techniques can be added to data labeling, evaluation, and production.",
"Indeed, some NLP work is using gamification, mostly for data collection (e.g., Kumaran et al., 2014; Ogawa et al., 2020; hman et al., 2018).",
"Moreover, the crowdsourcing platform may add elements of gamification over which the researcher has no control.",
"For example, MTurk recently introduced a Daily Goals Dashboard, where the worker can set game-like HITs Goal and Re-ward Goal, as shown in Figure 1. Figure 1: The Daily Goals Dashboard on MTurk 7 Ways Forward The use of crowdworkers is growing within the NLP community, but the ethical framework set in place to guarantee their ethical treatment (whether de jure or de facto ) did not anticipate the emergence of crowdsourcing platforms.",
"In most crowdsourced NLP tasks, researchers do not intend to gather information about the worker.",
"However, the crowdsourcing platform often autonomously collects such information.",
"As a result, it is often difficult to determine whether crowdworkers constitute human subjects which hinges on whether the researcher collects information about the worker.",
"However, a determination that all crowdworkers are human subjects and thus mandate an IRB approval for government-supported institutions might create a chilling effect and disadvantage university researchers compared to industry-affiliated researchers.",
"The effect is exacerbated in institutions where the ethics committee is heavily bureaucratic and does not offer a streamlined, expedited exemption process for low-to-no risk studies.",
"Whether their crowdsourced task requires IRB application or not, we recommend that the ethics-aware researcher should carefully examine their study in light of the three principles set up by the Belmont Report: Respect for persons, Beneficence, and Justice.",
"And while this mandates fair pay, it is important to note that this is just one of the implications.",
"There are other ethical considerations that are often overlooked in particular risk assessment of causing psychological harm and exposure of sensitive information.",
"Thus, we recommend increasing awareness of the potential ethical implications of crowdsourced NLP tasks.",
"As NLP researchers are now encouraged to add an ethical considerations section to their papers (NAACL, 2020), they should also be encouraged to carefully weigh potential benefits against risks related to the crowdsourced task.",
"We also propose increasing awareness by disseminating relevant knowledge and information.",
"An educational ethics resource created using a community effort could serve as a beneficial first step.",
"Such a resource can include guidelines, checklists, and case studies that are specific to the ethical challenges of crowdsourced tasks in the context of NLP research.",
"We believe that the creation of such a resource can serve as a springboard for a necessary nuanced conversation regarding the ethical use of crowdworkers in the NLP community.",
"We thank the anonymous reviewers for their valuable comments and suggestions which helped improve the paper.",
"This research was partially supported by the Ministry of Science and Technology in Taiwan under grants MOST 108-2221-E-001-012-MY3 and MOST 109-2221-E-001-015[ sic ]."
] | [
"abstain",
"abstain",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"In this paper, we observe that semi-structured tabulated text is ubiquitous; understanding them requires not only comprehending the meaning of text fragments, but also implicit relationships between them.",
"We argue that such data can prove as a testing ground for understanding how we reason about information.",
"To study this, we introduce a new dataset called INFOTABS, comprising of human-written textual hypotheses based on premises that are tables extracted from Wikipedia info-boxes.",
"Our analysis shows that the semi-structured, multi-domain and heterogeneous nature of the premises admits complex, multi-faceted reasoning.",
"Experiments reveal that, while human annotators agree on the relationships between a table-hypothesis pair, several standard modeling strategies are unsuccessful at the task, suggesting that reasoning about tables can pose a difficult modeling challenge.",
"Recent progress in text understanding has been driven by sophisticated neural networks based on contextual embeddingse.g., BERT (Devlin et al., 2019), and its descendantstrained on massive datasets, such as SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and SQuAD (Ra-jpurkar et al., 2016).",
"Several such models outperform human baselines on these tasks on the benchmark suites such as GLUE (Wang et al., 2019b).",
"Reasoning about text requires a broad array of skillsmaking lexical inferences, interpreting the nuances of time and locations, and accounting for world knowledge and common sense.",
"Have we achieved human-parity across such a diverse collection of reasoning skills?",
"In this paper, we study this question by proposing an extension of the natural language inference (NLI) task (Dagan et al., 2005, and others).",
"In Dressage Highestgoverning body International Federation for Equestrian Sports (FEI) Characteristics Contact No Team members Individual and team at international levels Mixed gender Yes Equipment Horse, horse tack Venue Arena, indoor or outdoor Presence Country or region Worldwide Olympic 1912 Paralympic 1996 H1: Dressage was introduced in the Olympic games in 1912.",
"H2: Both men and women compete in the equestrian sport of Dressage.",
"H3: A dressage athlete can participate in both individual and team events.",
"H4: FEI governs dressage only in the U.S. Figure 1: A semi-structured premise (the table).",
"NLI, which asks whether a premise entails, contradicts or is unrelated to a hypothesis, the premise and the hypothesis are one or more sentences.",
"Understanding the premise requires understanding its linguistic structure and reasoning about it.",
"We seek to separate these two components.",
"Our work stems from the observation that we can make valid inferences about implicit information conveyed by the mere juxtaposition of snippets of text, as shown in the table describing Dressage in Figure 1. We introduce the INFOTABS dataset to study and model inference with such semi-structured data.",
"Premises in our dataset consist of info-boxes that convey information implicitly, and thus require complex reasoning to ascertain the validity of hypotheses.",
"For example, determining that the hypothesis H2 in Figure 1 entails the premise table requires looking at multiple rows of the table, understanding the meaning of the row labeled Mixed gender , and also that Dressage is a sport.",
"INFOTABS consists of 23,738 premise-hypothesis pairs, where all premises are info-boxes, and the hypotheses are short sentences.",
"As in the NLI task, the objective is to ascertain whether the premise entails, contradicts or is unrelated to the hypothesis.",
"The dataset has 2,540 unique info-boxes drawn from Wikipedia articles across various categories, and all the hypotheses are written by Amazon's Mechanical Turk workers.",
"Our analysis of the data shows that ascertaining the label typically requires the composing of multiple types of inferences across multiple rows from the tables in the context of world knowledge.",
"Separate verification experiments on subsamples of the data also confirm the high quality of the dataset.",
"We envision our dataset as a challenging testbed for studying how models can reason about semi-structured information.",
"To control for the possibility of models memorizing superficial similarities in the data to achieve high performance, in addition to the standard train/dev/test split, our dataset includes two additional test sets that are constructed by systematically changing the surface forms of the hypothesis and the domains of the tables.",
"We report the results of several families of approaches representing word overlap based models, models that exploit the structural aspect of the premise, and also derivatives of state-of-the-art NLI systems.",
"Our experiments reveal that all these approaches underperform across the three test sets.",
"In summary, our contributions are: 1. We propose a new English natural language inference dataset, INFOTABS, to study the problem of reasoning about semi-structured data.",
"2. To differentiate models' ability to reason about the premises from their memorization of spurious patterns, we created three challenge test sets with controlled differences that employ similar reasoning as the training set.",
"3. We show that several existing approaches for NLI underperform on our dataset, suggesting the need for new modeling strategies.",
"The dataset, along with associated scripts, are available at https://infotabs.github.io/ .",
"We often encounter textual information that is neither unstructured (i.e., raw text) nor strictly",
"structured (e.g., databases).",
"Such data, where a structured scaffolding is populated with free-form text, can range from the highly verbose (e.g., web pages) to the highly terse (e.g. fact sheets, information tables, technical specifications, material safety sheets).",
"Unlike databases, such semi-structured data can be heterogeneous in nature, and not characterized by pre-defined schemas.",
"Moreover, we may not always have accompanying explanatory text that provides context.",
"Yet, we routinely make inferences about such heterogeneous, incomplete information and fill in gaps in the available information using our expectations about relationships between the elements in the data.",
"Understanding semi-structured information requires a broad spectrum of reasoning capabilities.",
"We need to understand information in an ad hoc layout constructed with elements (cells in a table) that are text snippets, form fields or are themselves substructured (e.g., with a list of elements).",
"Querying such data can require various kinds of inferences.",
"At the level of individual cells, these include simple lookup (e.g., knowing that dressage takes place in an arena ), to lexical inferences (e.g., understanding that Mixed Gender means both men and women compete), to understanding types of text in the cells (e.g., knowing that the number 1912 is a year).",
"Moreover, we may also need to aggregate information across multiple rows (e.g., knowing that dressage is a non-contact sport that both men and women compete in ), or perform complex reasoning that combines temporal information with world knowledge.",
"We argue that a true test of reasoning should evaluate the ability to handle such semi-structured information.",
"To this end, we define a new task modeled along the lines of NLI, but with tabular premises and textual hypotheses, and introduce a new dataset INFOTABS for this task.",
"Before describing the new dataset, we will characterize our approach for a successful evaluation of automated reasoning.",
"Recent work has shown that many datasets for NLI contain annotation biases or artifacts (e.g. Poliak et al., 2018).",
"In other words, large models trained on such datasets are prone to learning spurious patternsthey can predict correct labels even with incomplete or noisy inputs.",
"For instance, not and no in a hypothesis are correlated with contradictions (Niven and Kao, 2019).",
"Indeed, classi-fiers trained on the hypotheses only (ignoring the premises completely) report high accuracy; they exhibit hypothesis bias , and achieving a high predictive performance does not need models to discover relationships between the premise and the hypothesis.",
"Other artifacts are also possible.",
"For example, annotators who generate text may use systematic patterns that leak information about the label to a model.",
"Or, perhaps models can learn correlations that mimic reasoning, but only for one domain.",
"With millions of parameters, modern neural networks are prone to overfitting to such imperceptible patterns in the data.",
"From this perspective, if we seek to measure a model's capability to understand and reason about inputs, we cannot rely on a single fixed test set to rank models.",
"Instead, we need multiple test sets (of similar sizes) that have controlled differences from each other to understand how models handle changes along those dimensions.",
"While all the test sets address the same task, they may not all be superficially similar to the training data.",
"With this objective, we build three test sets, named 1 , 2 and 3 .",
"Here, we briefly introduce them; 4 goes into specifics.",
"Our first test set ( 1 ) has a similar distribution as the training data in terms of lexical makeup of the hypotheses and the premise domains.",
"The second, adversarial test set ( 2 ) , consists of examples that are also similar in distribution to the training set, but the hypothesis labels are changed by expert annotators changing as few words in the sentence as possible.",
"For instance, if Album X was released in the 21 st century is an entailment, the sentence Album X was released before the 21 st century is a contradiction, with only one change.",
"Models that merely learn superficial textual artifacts will get confused by the new sentences.",
"For 2 , we rewrite entailments as contradictions and vice versa, while the neutrals are left unaltered.",
"Our third test set is the cross-domain ( 3 ) set, which uses premises from domains that are not in the training split, but generally, necessitate similar types of reasoning to arrive at the entailment decision.",
"Models that overfit domain-specific artifacts will underperform on 3 .",
"Note that, in this work, we describe and introduce three different test sets, but we expect that future work can identify additional dimensions along which models overfit their training data and construct the corresponding test sets.",
"In this section, we will see the details of the construction of INFOTABS.",
"We adapted the general workflow of previous crowd sourcing approaches for creating NLI tasks (e.g., Bowman et al., 2015) that use Amazon's Mechanical Turk.",
"1 Sources of Tables Our dataset is based on 2 , 540 unique info-boxes from Wikipedia articles across multiple categories (listed in Appendix D).",
"We did not include tables that have fewer than 3 rows, or have non-English cells (e.g., Latin names of plants) and technical information that may require expertise to understand (e.g., astronomical details about exoplanets).",
"We also removed non-textual information from the table, such as images.",
"Finally, we simplified large tables into smaller ones by splitting them at sub-headings.",
"Our tables are isomorphic to key-value pairs, e.g., in Figure 1, the bold entries are the keys, and the corresponding entries in the same row are their respective values.",
"Sentence generation Annotators were presented with a tabular premise and instructed to write three self-contained grammatical sentences based on the tables: one of which is true given the table, one which is false, and one which may or may not be true.",
"The turker instructions included illustrative examples using a table and also general principles to bear in mind, such as avoiding information that is not widely known, and avoiding using information that is not in the table (including names of people or places).",
"The turkers were encouraged not to restate information in the table, or make trivial changes such as the addition of words like not or changing numerical values.",
"We refer the reader to the project website for a snapshot of the interface used for turking, which includes the details of instructions.",
"We restricted the turkers to be from English-speaking countries with at least a Master's quali-fication.",
"We priced each HIT (consisting of one table) at 50 .",
"Following the initial turking phase, we removed grammatically bad sentences and rewarded workers whose sentences involved multiple rows in the table with a 10% bonus.",
"Appendix C gives additional statistics about the turkers.",
"1 Appendix A has more examples of tables with hypotheses.",
"per table).",
"2 We partitioned these tables into training, development (Dev), 1 and 2 test sets.",
"To prevent an outsize impact of influential turkers in a split, we ensured that the annotator distributions in the Dev and test splits are similar to that of the training split.",
"We created the 2 test set from hypotheses similar to those in 1 , but from a separate set of tables, and perturbing them as described in 3. On an average, 2 .",
"2 words were changed per sentence to create 2 , with no more than 2 words changing in 72% of the hypotheses.",
"The provenance of 2 ensures that the kinds of reasoning needed for 2 are similar to those in 1 and the development set.",
"For the 3 test set, we annotated 200 additional tables belonging to domains not seen in the training set (e.g., diseases, festivals).",
"As we will see in 5, hypotheses in these categories involve a set of similar types of reasonings as 1 , but with different distributions.",
"In total, we collected 23 , 738 sentences split almost equally among entailments, contradictions, and neutrals.",
"Table 1 shows the number of tables and premise-hypothesis pairs in each split.",
"In all the splits, the average length of the hypotheses is similar.",
"We refer the reader to Appendix D for additional statistics about the data.",
"Validating Hypothesis Quality We validated the quality of the data using Mechanical Turk.",
"For each premise-hypothesis in the development and the test sets, we asked turkers to predict whether the hypothesis is entailed or contradicted by, or is unrelated to the premise table.",
"We priced this task at 36 for nine labels.",
"2 For tables with ungrammatical sentences, we repeated the HIT.",
"As a result, a few tables in the final data release have more than 9 hypotheses.",
"inter-annotator agreement scores with Cohen's Kappa scores (Artstein and Poesio, 2008) between 0 .",
"75 and 0 .",
"80 .",
"In addition, we see a majority agreement (at least 3 out of 5 annotators agree) of range between 93 % and 97 % .",
"Furthermore, the human accuracy agreement between the majority and gold label (i.e., the label intended by the writer of the hypothesis), for all splits is in range 80 % to 84 % , as expected given the difficulty of the task.",
"To study the nature of reasoning that is involved in deciding the relationship between a table and a hypothesis, we adapted the set of reasoning categories from GLUE (Wang et al., 2019b) to table premises.",
"For brevity, here we will describe the categories that are not in GLUE and defined in this work for table premises.",
"Appendix B gives the full list with definitions and examples.",
"Simple look up refers to cases where there is no reasoning and the hypothesis is formed by literally restating what is in the table as a sentence; multi-row reasoning requires multiple rows to make an inference; and subjective/out-of-table inferences involve value judgments about a proposition or reference to information out of the table that is neither well known or common sense.",
"All definitions and their boundaries were verified via several rounds of discussions.",
"Following this, three graduate students independently annotated 160 pairs from the Dev and 3 test sets each, and edge cases were adjudicated to arrive at consensus labels.",
"Figures 2a and 2b summarizes these annotation efforts.",
"We see that we have a multifaceted complex range of reasoning types across both sets.",
"Importantly, we observe only a small number of simple lookups, simple negations for contradictions, and mere syntactic alternations that can be resolved without complex reasoning.",
"Many instances call for looking up multiple rows, and involve temporal and numerical reasoning.",
"Indeed, as Figures 2c and 2d show, a large number of examples need at least two distinct kinds of reasoning; on an average, sentences in the Dev and 3 sets needed 2.32 and 1.79 different kinds of reasoning, respectively.",
"We observe that semi-structured premises forced annotators to call upon world knowledge and common sense (KCS); 48 .",
"75% instances in the Dev set require KCS.",
"(In comparison, in the MultiNLI data, KCS is needed in 25 . 72% of examples.)",
"We conjecture that this is because information about the entities and their types is not explicitly stated in tables, and have to be inferred.",
"To do so, our annotators relied on their knowledge about the world including information about weather, seasons, and widely known social and cultural norms and facts.",
"An example of such common sense is the hypothesis that X was born in summer for a person whose date of birth is in May in New York.",
"We expect that the INFOTABS data can serve as a basis for studying common sense reasoning alongside other recent work such as that of Talmor et al. (2019), Neutral hypotheses are more inclined to being subjective/out-of-table because almost anything subjective or not mentioned in the table is a neutral statement.",
"Despite this, we found that in all evaluations in Appendix E (except those involving the adversarial 2 test set), our models found neutrals almost as hard as the other two labels, with only an 3% gap between the F-scores of the neutral label and the next best label.",
"The distribution of train, dev, 1 and 2 are similar because the premises are taken from the same categories.",
"However, tables for 3 are from different domains, hence not of the same distribution as the previous splits.",
"This difference is also re-flected in Figures 2a and 2b, as we see a different distribution of reasonings for each test set.",
"This is expected; for instance, we cannot expect temporal reasoning from tables in a domain that does not contain temporal quantities.",
"The goal of our experiments is to study how well different modeling approaches address the INFOTABS data, and also to understand the impact of various artifacts on them.",
"First, we will consider different approaches for representing tables in ways that are amenable to modern neural models.",
"A key aspect of the INFOTABS task that does not apply to the standard NLI task concerns how premise tables are represented.",
"As baselines for future work, let us consider several different approaches.",
"1. Premise as Paragraph (Para) : We convert the premise table into paragraphs using fixed template applied to each row.",
"For a table titled t , a row with key k and value v is written as the sentence The k of t are v .",
"For example, for the table in Figure 1, the row with key Equipment gets mapped to the sentence The equipment of Dressage are horse, horse tack.",
"We have a small number of exceptions: e.g., if the key is born or died , we use the following template: t was k on v .",
"The sentences from all the rows in the table are concatenated to form the premise paragraph.",
"While this approach does not result in grammatical sentences, it fits the interface for standard sentence encoders.",
"2. Premise as Sentence (Sent): Since hypotheses are typically short, they may be derived from a small subset of rows.",
"Based on this intuition, we use the word mover distance (Kus-ner et al., 2015) to select the closest and the three closest sentences to the hypothesis from the paragraph representation (denoted by WMD-1 and WMD-3, respectively).",
"3. Premise as Structure 1 (TabFact) : Following Chen et al. (2020), we represent tables by a sequence of key : value tokens.",
"Rows are separated by a semi-colon and multiple values for the same key are separated by a comma.",
"4. Premise as Structure 2 (TabAttn) : To study an attention based approach, such as that of Parikh et al. (2016), we convert keys and values into a contextually enriched vectors by first converting them into sentences using the Para approach above, and applying a contextual encoder to each sentence.",
"From the token embeddings, we obtain the embeddings corresponding of the keys and values by mean pooling over only those tokens.",
"Based on the various representations of tables described above, we developed a collection of models for the table inference problem, all based on standard approaches for NLI.",
"Due to space constraints, Reasoning Types N u m be r o f E x a m p l e s 0 20 40 60 80 C o r e f E lli p s i s E n t i t y T y p e KCSL e x i c a l R e a s o n i n g M u l t i r o w N a m e d E n t i t y N e g a t i o n N u m e r i c a l Q u a n t i f i c a t i o n S i m p l e L o o k u p S u b j e c t i v e / O OTS y n t a c t i c A l t e r n a t i o n T e m p o r a l Contradiction NeutralEntailment",
"(a) Number of examples per reasoning type in the Dev set Reasoning Types N u m be r o f E x a m p l e s 0 20 40 60 80 C o r e f E lli p s i s E n t i t y T y p e KCSL e x i c a l R e a s o n i n g M u l t i r o w N a m e d E n t i t y N e g a t i o n N u m e r i c a l Q u a n t i f i c a t i o n S i m p l e L o o k u p S u b j e c t i v e / O OTS y n t a c t i c A l t e r n a t i o n T e m p o r a l Contradiction NeutralEntailment",
"(d) Number of reasonings per example in the 3 set Figure 2: Distribution of the various kinds of reasoning in the Dev and 3 sets.",
"we give a brief description of the models here and refer the interested reader to the code repository for implementation details.",
"For experiments where premises are represented as sentences or paragraphs, we evaluated a feature-based baseline using unigrams and bigrams of tokens.",
"For this model (referred to as SVM ), we used the LibLinear library (Fan et al., 2008).",
"For these representations, we also evaluated a collection of BERT-class of models.",
"Following the standard setup, we encoded the premise-hypothesis pair, and used the classification token to train a classifier, specifically a two-layer feedforward network that predicts the label.",
"The hidden layer had half the size of the token embeddings.",
"We compared RoBERTa L (Large), RoBERTa B (Base) and BERTB (Base) in our experiments.",
"We used the above BERT strategy for the TabFact representations as well.",
"For the TabAttn representations, we implemented the popular decomposable attention model (Parikh et al., 2016) using the premise key-value embeddings and hypothesis token embeddings with 512 dimensional attend and compare layers.",
"We implemented all our models using the Py-Torch with the transformers library (Wolf et al., 2019).",
"We trained our models using Adagrad with a learning rate of 10 4 , chosen by preliminary experiments, and using a dropout value of 0.2.",
"All our results in the following sections are averages of models trained from three different random seeds.",
"Does our dataset exhibit hypothesis bias?",
"Before we consider the question of whether we can model premise-hypothesis relationships, let us first see if a model can learn to predict the entailment label without using the premise, thereby exhibiting an undesirable artifact.",
"We consider three classes of models to study hypothesis bias in INFOTABS.",
"Hypothesis Only (hypo-only): The simplest way to check for hypothesis bias is to train a classifier using only the hypotheses.",
"Without a premise, a classifier should fail to correlate the hypothesis and the label.",
"We represent the hypothesis in two ways",
"a) using unigrams and bigrams for an SVM, and",
"b) using a single-sentence BERT-class model.",
"The results of the experiments are given in Table 3. Model Dev 1 2 3 Majority 33.33 33.33 33.33 33.33 SVM 59.00 60.61 45.89 45.89 BERTB 62.69 63.45 49.65 50.45 RoBERTa B 62.37 62.76 50.65 50.8 RoBERTa L 60.51 60.48 48.26 48.89 Table 3: Accuracy of hypothesis-only baselines on the INFOTABS Dev and test sets Dummy or Swapped Premise: Another approach to evaluate hypothesis bias is to provide an unrelated premise and train a full entailment model.",
"We evaluated two cases, where every premise is changed to a",
"(a) dummy statement ( to be or not to be ), or",
"(b) a randomly swapped table that is represented as paragraph.",
"In both cases, we trained a RoBERTa L classifier as described in 6.2.",
"The results for these experiments are presented in Table 4. Premise Dev 1 2 3 dummy 60.02 59.78 48.91 46.37 swapped 62.94 65.11 52.55 50.21 Table 4: Accuracy with dummy/swapped premises Results and Analysis: Looking at the Dev and 1 columns of Tables 3 and 4, we see that these splits do have hypothesis bias.",
"All the BERT-class models discover such artifacts equally well.",
"However, we also observe that the performance on 2 and 3 data splits is worse since the artifacts in the training data do not occur in these splits.",
"We see a performance gap of 12% as compared to Dev and 1 splits in all cases.",
"While there is some hypothesis bias in these splits, it is much less pronounced.",
"An important conclusion from these results is that the baseline for all future models trained on these splits should be the best premise-free performance.",
"From the results here, these correspond to the swapped setting.",
"How do trained NLI systems perform on our dataset?",
"Given the high leaderboard accuracies of trained NLI systems, the question of whether these models can infer entailment labels using a linearization of the tables arises.",
"To study this, we trained RoBERTa L models on the SNLI and MultiNLI datasets.",
"The SNLI model achieves an accuracy of 92.56 % on SNLI test set.",
"The MultiNLI model achieves an accuracy of 89.0 % on matched and 88.99 % on the mismatched MultiNLI test set.",
"We evaluate these models on the WMD-1 and the Para representations of premises.",
"Results and Analysis: In Table 5, all the results point to the fact that pre-trained NLI systems do not perform well when tested on INFOTABS.",
"We observe that full premises slightly improve performance over the WMD-1 ones.",
"This might be due to",
"a) ineffectiveness of WMD to identify the correct premise sentence, and",
"b) multi-row reasoning.",
"Does training on the paragraph/sentence representation of a premise help?",
"The next set of experiments compares BERT-class models and SVM trained using the paragraph (Para) and sentence (WMD-n) representations.",
"The results for these experiments are presented in Table 6. Premise Dev 1 2 3 Train with SVM Para 59.11 59.17 46.44 41.28 Train with BERTB Para 63.00 63.54 52.57 48.17 Train with RoBERTa B Para 67.2 66.98 56.87 55.36 Train with RoBERTa L WMD-1 65.44 65.27 57.11 52.55 WMD-3 72.55 70.38 62.55 61.33 Para 75.55 74.88 65.55 64.94 Table 6: Accuracy of paragraph and sentence premise representation reported on SVM, BERTB , RoBERTa B and RoBERTa L Results and Analysis: We find that training with the INFOTABS training set improves model performance significantly over the previous baselines, except for the simple SVM model which relies on unigrams and bigrams.",
"We see that RoBERTa L outperforms its base variant and BERTB by around 9% and 14% respectively.",
"Similar to the earlier observation, providing full premise is better than selecting a subset of sentences.",
"Importantly, 2 and 3 performance is worse than 1 , not only suggesting the difficulty of these data splits, but also showing that models overfit both lexical patterns (based on 2 ) or domain-specific patterns (based on 3 ).",
"Does training on premise encoded as structure help?",
"Rather than linearizing the tables as sentences, we can try to encode the structure of the tables.",
"We consider two representative approaches for this, TabFact and TabAttn, each associated with a different model as described in 6.2.",
"The results for these experiments are listed in Table 7. Premise Dev 1 2 3 Train with BERTB TabFact 63.67 64.04 53.59 49.05 Train with RoBERT B TabFact 68.06 66.7 56.87 55.26 Train with RoBERTa L TabAttn 63.63 62.94 49.37 49.04 TabFact 77.61 75.06 69.02 64.61 Table 7: Accuracy on structured premise representation reported on BERTB , RoBERTa B and RoBERTa L Results and Analysis: The idea of using this family of models was to leverage the structural aspects of our data.",
"We find that the TabAttn model, however, does not improve the performance.",
"We assume that this might be due to the bag of words style of representation that the classifier employs.",
"We find, however, that providing premise structure information helps the TabFact model perform better than the RoBERTa L +Para model.",
"As before model performance drops for 2 and 3 .",
"How many types of reasoning does a trained system predict correctly?",
"Using a RoBERTa L , which was trained on the paragraph (Para) representation, we analyzed the examples in Dev and 3 data splits that were annotated by experts for their types of reasoning ( 5).",
"Figure 3 shows the summary of this analysis.",
"Results and Analysis: Figures 3a and 3b show the histogram of reasoning types among correctly predicted examples.",
"Compared to Figures 2a and 2b, we see a decrease in correct predictions across all reasoning types for both Dev and 3 sets.",
"In particular, in the Dev set, the model performs poorly for the knowledge & common sense, multi-row, coreference, and temporal reasoning categories.",
"Discussion Our results show that: 1) INFOTABS contains a certain amount of artifacts which transformer-based models learn, but all models have a large gap to human performance; and 2) models accuracies drop on 2 and 3 , suggesting that all three results together should be used to characterize the model, and not any single one of them.",
"All our models are significantly worse than the human performance ( 84 . 04% , 83 . 88% and 79 . 33% for 1 , 2 and 3 respectively).",
"With a difference of 14% between our best model and the human performance, these results indicate that INFOTABS is a challenging dataset.",
"NLI Datasets Natural language inference/textual entailment is a well studied text understanding task, and has several datasets of various sizes.",
"The annual PASCAL RTE challenges (Dagan et al., 2005, inter alia) were associated with several thousands of human-annotated entailment pairs.",
"The SNLI dataset (Bowman et al., 2015) is the first large scale entailment dataset that uses image captions as premises, while the MultiNLI (Williams et al., 2018) uses premises from multiple domains.",
"The QNLI and WNLI datasets provide a new perspective by converting the SQuAD question answering data (Rajpurkar et al., 2016) and Winograd Schema Challenge data (Levesque et al., 2012) respectively into inference tasks.",
"More recently, SciTail (Khot et al., 2018) and Adversarial NLI (Nie et al., 2019) have focused on building adversarial datasets; the former uses information retrieval to select adversarial premises, while the latter uses iterative annotation cycles to confuse models.",
"Reasoning Recently, challenging new datasets have emerged that emphasize complex reasoning.",
"Bhagavatula et al. (2020) pose the task of determining the most plausible inferences based on observation (abductive reasoning).",
"Across NLP, a lot of work has been published around different kinds of reasonings.",
"To name a few, common sense (Talmor et al., 2019), temporal (Zhou et al., 2019), numerical (Naik et al., 2019; Wallace et al., 2019b) and Reasoning Types N u m be r o f C o rr e c t P r ed i c t i on 0 20 40 60 80 C o r e f E lli p s i s E n t i t y T y p e KCSL e x i c a l R e a s o n i n g M u l t i r o w N a m e d E n t i t y N e g a t i o n N u m e r i c a l Q u a n t i f i c a t i o n S i m p l e L o o k u p S u b j e c t i v e / O OTS y n t a c t i c A l t e r n a t i o n T e m p o r a l Contradiction NeutralEntailment",
"multi-hop (Khashabi et al., 2018) reasoning have all garnered immense research interest.",
"Tables and Semi-structured data Tasks based on semi-structured data in the form of tables, graphs and databases (with entries as text) contain complex reasoning (Dhingra et al., 2019; Chen et al., 2020).",
"Previous work has touched upon semantic parsing and question answering (e.g., Pasupat and Liang, 2015; Khashabi et al., 2016, and references therein), which typically work with tables with many entries that resemble database records.",
"Our work is most closely related to TabFact (Chen et al., 2020), which considers database-style tables as premises with human-annotated hypotheses to form an inference task.",
"While there are similarities in the task formulation scheme, our work presents an orthogonal perspective:",
"(i) The Wikipedia tables premises of TabFact are homogeneous, i.e., each column in a table has structural redundancy and all entries have the same type.",
"One can look at multiple entries of a column to infer extra information, e.g., all entries of a column are about locations.",
"On the contrary, the premises in our dataset are heterogeneous.",
"(ii) TabFact only considers entailment and contradiction; we argue that inference is non-binary with a third unde-termined class (neutrals).",
"(iii) Compared to our multi-faceted reasonings, the reasonings of the hypotheses in TabFact are limited and mostly numerical or comparatives.",
"(iv) The 2 and 3 sets help us check for annotation and domain-specific artifacts.",
"Artifacts Recently, pre-trained transformer-based models (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019, and others) have seemingly outperformed human performance on several NLI tasks (Wang et al., 2019b,a).",
"However, it has been shown by Poliak et al. (2018); Niven and Kao (2019); Gururangan et al. (2018); Glockner et al. (2018); Naik et al. (2018); Wallace et al. (2019a) that these models exploit spurious patterns (artifacts) in the data to obtain good performance.",
"It is imperative to produce datasets that allow for controlled study of artifacts.",
"A popular strategy today is to use adversarial annotation (Zellers et al., 2018; Nie et al., 2019) and rewriting of the input (Chen et al., 2020).",
"We argue that we can systematically construct test sets that can help study artifacts along specific dimensions.",
"We presented a new high quality natural language inference dataset, INFOTABS, with heterogeneous semi-structured premises and natural language hypotheses.",
"Our analysis showed that our data encompasses several different kinds of inferences.",
"INFOTABS has multiple test sets that are designed to pose difficulties to models that only learn superficial correlations between inputs and the labels, rather than reasoning about the information.",
"Via extensive experiments, we showed that derivatives of several popular classes of models find this new inference task challenging.",
"We expect that the dataset can serve as a testbed for developing new kinds of models and representations that can handle semi-structured information as first class citizens.",
"We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project; and reviewers their helpful comments.",
"We acknowledge the support of the support of NSF Grants No. 1822877 and 1801446, and a generous gift from Google."
] | [
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"result",
"result",
"objective",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"method",
"objective",
"result",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"A recent advance in monolingual dependency parsing is the idea of a treebank embedding vector, which allows all treebanks for a particular language to be used as training data while at the same time allowing the model to prefer training data from one treebank over others and to select the preferred treebank at test time.",
"We build on this idea by",
"1) introducing a method to predict a treebank vector for sentences that do not come from a treebank used in training, and",
"2) exploring what happens when we move away from predefined treebank embedding vectors during test time and instead devise tailored interpolations.",
"We show that",
"1) there are interpolated vectors that are superior to the predefined ones, and",
"2) treebank vectors can be predicted with sufficient accuracy, for nine out of ten test languages, to match the performance of an oracle approach that knows the most suitable predefined treebank embedding for the test set.",
"The Universal Dependencies project (Nivre et al., 2016) has made available multiple treebanks for the same language annotated according to the same scheme, leading to a new wave of research which explores ways to use multiple treebanks in monolingual parsing (Shi et al., 2017; Sato et al., 2017; Che et al., 2017; Stymne et al., 2018).",
"Stymne et al. (2018) introduced a treebank embedding .",
"A single model is trained on the concatenation of the available treebanks for a language, and the input vector for each training token includes the treebank embedding which encodes the treebank the token comes from.",
"At test time, all input vectors in the test set of the same treebank are also assigned this treebank embedding vector.",
"Stymne et al. (2018) show that this approach is superior to mono-treebank training and to plain treebank concatenation.",
"Treebank embeddings perform at about the same level as training on multiple treebanks and tuning on one, but they argue that a treebank embedding approach is preferable since it results in just one model per language.",
"What happens, however, when the input sentence does not come from a treebank?",
"Stymne et al. (2018) simulate this scenario with the Parallel Universal Dependency (PUD) test sets.",
"They define the notion of a proxy treebank which is the treebank to be used for a treebank embedding when parsing sentences that do not come from any of the training treebanks.",
"They empirically determine the best proxy treebank for each PUD test set by testing with each treebank embedding.",
"However, the question remains what to do with sentences for which no gold parse is available, and for which we do not know the best proxy.",
"We investigate the problem of choosing treebank embedding vectors for new, possibly out-of-domain, sentences.",
"In doing so, we explore the usefulness of interpolated treebank vectors which are computed via a weighted combination of the predefined fixed ones.",
"In experiments with Czech, English and French, we establish that useful interpolated treebank vectors exist.",
"We then develop a simple k-NN method based on sentence similarity to choose a treebank vector, either fixed or interpolated, for sentences or entire test sets, which, for 9 of our 10 test languages matches the performance of the best (oracle) proxy treebank.",
"Following recent work in neural dependency parsing (Chen and Manning, 2014; Ballesteros et al., 2015; Kiperwasser and Goldberg, 2016; Zeman et al., 2017, 2018), we represent an input token by concatenating various vectors.",
"In our experiments, each word w i in a sentence S = ( w 1 ,..., w n ) is a concatenation of",
"1) a dynamically learned word vector,",
"2) a word vector obtained by passing the k i characters of w i through a BiLSTM and 3), following Stymne et al. (2018), a treebank embedding to distinguish the m training treebanks: e ( i ) = e 1 ( w i ) biLSTM ( e 2 ( ch i, 1 ) , ..., e 2 ( ch i,k i )) f (1) Stymne et al. (2018) use f = e 3 ( t (cid:63) ) (2) where t (cid:63) 1 , ..., m is the source treebank for sentence S or if S does not come from one of the m treebanks, a choice of one of these (the proxy treebank).",
"We change f during test time to f = m (cid:88) t =1 t e 3 ( t ) (3) where there are m treebanks for the language in question and (cid:80) mt =1 t = 1 .",
"For all experiments, we use UD v2.3 (Nivre et al., 2018).",
"We choose Czech, English and French as our development languages because they each have four treebanks (excluding PUD), allowing us to train on three treebanks and test on a fourth.",
"For testing, we use the PUD test sets for languages for which there are at least two other treebanks with training data: Czech, English, Finnish, French, Italian, Korean, Portuguese, Russian, Spanish and Swedish.",
"Following Stymne et al. (2018), we use the transition-based parser of de Lhoneux et al. (2017) with the token input representations as Eq.",
"1 above.",
"Source code of our modified parser and helper scripts to carry out the experiments are available online.",
"1 4 Are Interpolated Treebank Vectors Useful?",
"We attempt to ascertain how useful interpolated treebank embedding vectors are by examining the labelled attachment score (LAS) of trees parsed with different interpolated treebank vectors.",
"For each of our three development languages, we train multi-treebank parsing models on the four combinations of three of the four available treebanks and we test each model on the development sets 1 https://github.com/jowagner/ tbev-prediction Figure 1: LAS in the treebank vector weight space ( m = 3 ) for cs cltt+fictree+pdt on cs cac-dev with the second seed.",
"Since m = 3 and (cid:80) mt =1 t = 1 , all treebank vectors lie in a plane and we can visualise LAS results in colour plots.",
"As the treebank vectors can have arbitrary distances, we plot (and sample) in the weight space R m .",
"We include the equilateral triangle spanned by the three fixed treebank embedding vectors in our plots.",
"Points outside the triangle can be reached by allowing negative weights t < 0 .",
"We obtain treebank LAS and sentence-level LAS for 200 weight vectors sampled from the weight space, including the corners of the triangle, and repeat with different seeds for parameter initialisation and training data shuffling.",
"Rather than sampling at random, points are chosen so that they are somewhat symmetrical and evenly distributed.",
"Figure 1 shows the development set LAS on cs cac-dev for a model trained on cs cltt+fictree+pdt with the second seed.",
"We create 432 such plots for nine seeds, four training configurations, four development sets and three languages.",
"The patterns vary with each seed and configuration.",
"The smallest LAS range within a plot is 87.8 to 88.3 ( cs cac+cltt+pdt on cs pdt with the seventh seed).",
"The biggest LAS range is 59.7 to 76.8 ( fr gsd+sequoia+spoken on fr spoken with the fifth seed).",
"The location of the fixed treebank vectors e 3 ( t ) are at the corners of the triangle in each graph.",
"For in-domain settings one or two corners usually have LAS close to the highest LAS in the plot.",
"The 2 An in-domain example is testing a model trained on cs cac+cltt+fictree on cs cac , and an out-of-domain example is testing the same model on cs pdt .",
"best LAS scores (black circles), however, are often located outside the triangle, i.",
"e.",
"negative weights are needed to reach it.",
"Turning to sentence-level LAS, Figure 2 shows the LAS for an individual example sentence rather than an entire development set.",
"This sentence is taken from en partut-dev and is parsed with a model trained on en ewt+gum+lines .",
"For this 28-token sentence, LAS can only change in steps of 1/28 and 34 of the 200 treebank embedding weight points share the top score.",
"Negative weights are needed to reach these points outside the triangle.",
"Over all development sentences and parsing models, an interpolated treebank vector achieves highest LAS for 99.99% of sentences: In 78.07% of cases, one of the corner vectors also achieves the highest LAS and in the remaining 21.92%, interpolated vectors are needed.",
"It is also worth noting that, for 39% of sentences, LAS does not depend on the treebank vectors at all, at least not in the weight range explored.",
"Often, LAS changes from one side to another side of the graph.",
"The borders have different orientation and sharpness.",
"The fraction of points with highest LAS varies from few to many.",
"The same is true for the fraction of points with lowest LAS.",
"Noise seems to be low.",
"Most data points match the performance of their neighbours, i.",
"e.",
"the scores are not sensitive to small changes of the treebank weights, suggesting that the observed differences are not just random numerical effects.",
"This preliminary analysis suggests that useful interpolated treebank vectors do exist.",
"Our next step is to try to predict them.",
"In all subsequent experiments, we focus on the out-of-domain setting, i.",
"e.",
"each multi-treebank model is tested on a treebank not included in training.",
"We use k -nearest neighbour ( k -NN) classification to predict treebank embedding vectors for an individual sentence or a set of sentences at test time.",
"We experiment with",
"1) allocating the treebank vector for an input sentence using the k most similar training sentences ( se-se ), and",
"2) allocating the treebank vector for a set of input sentences using the most similar training treebank ( tr-tr ).",
"We will first explain the se-se case.",
"For each input sentence, we retrieve from the training data the k most similar sentences and then identify the treebank vectors from the candidate samples that have the highest LAS.",
"To compute similarity, we represent sentences either as tf-idf vectors computed over character n-grams, or as vectors produced by max-pooling over a sentence's ELMo vectors (Peters et al., 2018) produced by averaging all ELMo biLM layers.",
"3 We experiment with k = 1 , 3 , 9 .",
"For many sentences, several treebank vectors yield the optimal LAS for the most similar retrieved sentence(s), and so we try several tie-breaking strategies, including choosing the vector closest to the uniform weight vector (i. e. each of the three treebanks is equally weighted), re-ranking the list of vectors in the tie according to the LAS of the next most similar sentence, and using the average LAS of the k sentences retrieved to choose the treebank vector.",
"Three treebank vector sample sizes were tried:",
"1. fixed : Only the three fixed treebank vectors, i.",
"e.",
"the corners of the triangle in Fig.",
"1.",
"2. t 0 : Negative weights are not used in the interpolation, i.",
"e.",
"only the 32 points inside or on the triangle in Fig.",
"1.",
"3. any : All 200 weight points shown in Fig.",
"1. When retrieving treebanks ( tr-tr ), we use the average of the treebank's sentence representation vectors as the treebank representation and we normalise the vectors to the unit sphere as otherwise the size of the treebank would dominate the location in vector space.",
"We include oracle versions of each k-NN model in our experiments.",
"The k-NN oracle method is different from the normal k-NN method in that the test data is added to the training data so that the test data itself will be retrieved.",
"This means that a 3 We use ELMoForManyLangs (Che et al., 2018).",
"k-NN oracle with k = 1 knows exactly what treebank vector is best for each test item while a basic k-NN model has to predict the best vector based on the training data.",
"In the tr-tr setting, our k-NN classifier is selecting one of three treebanks for the fourth test treebank.",
"In the oracle k-NN setting, it selects the test treebank itself and parses the sentences in that treebank with its best-performing treebank vector.",
"When the treebank vector sample space is limited to the vectors for the three training treebanks (fixed), this method is the same as the best-proxy method of Stymne et al. (2018).",
"The development results, averaged over the four development sets for each language, are shown in Tables 1 and",
"2. 4 As discussed above, upper bounds for k -NN prediction are calculated by including an oracle setting in which the query item is added to the set of items to be retrieved, and k restricted to",
"1. We are also curious to see what happens when an equal combination of the three fixed vectors (uni-form weight vector) is used ( equal ), and when treebank vectors are selected at random.",
"Table 1 shows the se-se results.",
"The top section shows the results of randomly selecting a sentence's treebank vector, the middle section shows the k -NN results and the bottom section the oracle k -NN results.",
"The k -NN predictor clearly outperforms the random predictor for English and French, but not for Czech, suggesting that the treebank vector itself plays less of a role for Czech, perhaps due to high domain overlap between the treebanks.",
"The 4 To reduce noise from random initialisation, we parse each development set nine times with nine different seeds and use the median LAS.",
"oracle k -NN results indicate not only the substantial room for improvement for the predictor, but also the potential of interpolated vectors since the results improve as the sample space is increased beyond the three fixed vectors.",
"Table 2 shows the tr-tr results.",
"The first section is the proxy treebank embedding of Stymne et al. (2018) where one of the fixed treebank vectors is used for parsing the development set.",
"We report the bestand worst-performing of the three ( proxy-best and proxy-worst ).",
"The k -NN methods are shown in the second section of Table",
"2. The first row of this section ( fixed weights) can be directly compared with the proxy-best .",
"For Czech and French, the k -NN method matches the performance of proxy-best .",
"For English, it comes close.",
"Examining the per-treebank English results, k -NN predicts the best proxy treebank for all but en partut , where it picks the second best ( en gum ) instead of the best ( en ewt ).",
"The oracle k -NN results are shown in the third section of Table",
"2. 5 Although less pronounced than for the more difficult se-se task, they indicate that there is still some room for improving the vector predictor at the document level if interpolated vectors are considered.",
"Our equal method, that uses the weights ( 1 3 , 1 3 , 1 3 ), is shown in the last row of Table",
"2. It is the overall best English model.",
"Our best model for Czech is a tr-tr model which just selects from the three fixed treebank vectors.",
"For French, the best is a tr-tr model which selects from interpolated vectors with positive weights.",
"For the PUD languages not used in development, we se-5 Recall that the first method in this section, oracle fixed , is the same method as proxy-best .",
"lect the hyper-parameters based on average LAS on all 12 development sets.",
"The resulting generic hyper-parameters are the same as those for the best French model: tr-tr with interpolated vectors and positive weights.",
"6 The PUD test set results are shown in Table",
"3. For nine out of ten languages we match the oracle method proxy-best within a 95% confidence interval.",
"7 For Russian, the treebank vector of the second-best proxy treebank is chosen, falling 0.8 LAS points behind.",
"Still, this difference is not sig-nificant (p=0.055).",
"For English, the generic model also picks the second-best proxy treebank.",
"8 7 Conclusion In experiments with Czech, English and French, we investigated treebank embedding vectors, exploring the ideas of interpolated vectors and vector weight prediction.",
"Our attempts to predict good vector weights using a simple regression model yielded encouraging results.",
"Testing on PUD languages, we match the performance of using the best fixed treebank embedding vector in nine of ten cases within the bounds of statistical significance and in five cases exactly match it.",
"6 While the k -NN models selected for final testing use char-n -gram-based sentence representations, ELMo representations are competitive.",
"7 Statistical significance is tested with udapi-python ( https://github.com/udapi/udapi-python ).",
"8 For Korean PUD, LAS scores are surprisingly low given that development results on ko gsd and ko kaist are above 76.5 for all seeds.",
"A run with a mono-treebank model confirms low performance on Korean PUD.",
"According to a reviewer, there are known differences in the annotation between the Korean UD treebanks.",
"On the whole, it seems that our predictor is not yet good enough to find interpolated treebank vectors that are clearly superior to the basic, fixed vectors and that we know to exist from the oracle runs.",
"Still, we think it is encouraging that performance did not drop substantially when the set of candidate vectors was widened ( t 0 and any').",
"We do not think the superior treebank vectors found by the oracle runs are simply noise, i.",
"e.",
"model fluctuations due to varied inputs, because the LAS landscape in the weight vector space is not noisy.",
"For individual sentences, LAS is usually constant in large areas and there are clear, sharp steps to the next LAS level.",
"Therefore, we think that there is room for improvement for the predictor to find interpolated vectors which are better than the fixed ones.",
"We plan to explore other methods to predict treebank vectors, e.",
"g.",
"neural sequence modelling, and to apply our ideas to the related task of language embedding prediction for zero-shot learning.",
"Another area for future work is to explore what information treebank vectors encode.",
"The previous work on the use of treebank vectors in monoand multi-lingual parsing suggests that treebank vectors encode information that enables the parser to select treebank-specific information where needed while also taking advantage of treebank-independent information available in the training data.",
"The type of information will depend on the selection of treebanks, e.",
"g.",
"in a polyglot setting the vector may simply encode the language, and in a monolingual setting such as ours it may encode annotation or domain differences between the treebanks.",
"Interpolating treebank vectors adds a layer of opacity, and, in future work, it would be interesting to carry out experiments with synthetic data, e.",
"g.",
"varying the number of unknown words, to get a better understanding of what they may be capturing.",
"Future work should also test even simpler strategies which do not use the LAS of previous parses to gauge the best treebank vector, e.",
"g.",
"always picking the largest treebank.",
"This research is supported by Science Foundation Ireland through the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.",
"We thank the reviewers for their inspiring questions and detailed feedback."
] | [
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Wet laboratory protocols (WLPs) are critical for conveying reproducible procedures in biological research.",
"They are composed of instructions written in natural language describing the step-wise processing of materials by specific actions.",
"This process flow description for reagents and materials synthesis in WLPs can be captured by material state transfer graphs (MSTGs), which encode global temporal and causal relationships between actions.",
"Here, we propose methods to automatically generate a MSTG for a given protocol by extracting all action relationships across multiple sentences.",
"We also note that previous corpora and methods focused primarily on local intra-sentence relationships between actions and entities and did not address two critical issues:",
"(i) resolution of implicit arguments and",
"(ii) establishing long-range dependencies across sentences.",
"We propose a new model that incrementally learns latent structures and is better suited to resolving inter-sentence relations and implicit arguments.",
"This model draws upon a new corpus WLP-MSTG which was created by extending annotations in the WLP corpora for inter-sentence relations and implicit arguments.",
"Our model achieves an F1 score of 54.53% for temporal and causal relations in protocols from our corpus, which is a significant improvement over previous models Dy-GIE++:28.17%; spERT:27.81%.",
"We make our annotated WLP-MSTG corpus available to the research community.",
"1 1 Introduction Wet laboratory protocols (WLPs) play an integral role in bioscience and biomedical research by serving as a vehicle to communicate experimental instructions that allow for standardization and replication of experiments.",
"These procedures, typically written in natural language, prescribe actions (Fig-ure 1) to be conducted on materials that generally 1 The dataset and code is available on the authors' websites Isolation of temperate phages by plaque agar overlay 1. Grow the bacteria overnight.",
"produce new materials which, in turn, are used by future actions to make newer materials.",
"However, WLPs can be unclear, composed of disconnected and distant parts, and built upon implicit information that were referenced earlier or omitted entirely.",
"Lack of careful documentation has led to a reproducibility crisis (Baker, 2016) in the biosciences and also poses considerable challenges for automation of laboratory procedures: gleaning the effect and semantics of actions requires understanding the underlying experiment, the sentence structure and rationale behind implicitly stated arguments.",
"Currently, there is a dearth of annotated resources for natural language instructions in laboratory protocols.",
"The WLP corpus initially collected by Kulkarni et al. (2018) and later updated by Tabassum et al. (2020) focused solely on relations within sentences.",
"However, actions in WLPs are more complex, containing additional relations between actions (e.g., temporal and causal rela-tions).",
"We propose using material state transfer graphs (MSTG), which are a natural extension of Action Graphs (Kulkarni et al., 2018).",
"MSTGs link together several Action Graphs into a larger structure by utilizing global temporal and causal relationships that can span several sentences in order to describe the flow of materials from action to action (Section 3).",
"An example of a MSTG is shown in Figure 1. The action phrase Grow the bacteria overnight in Step 1 consists of an action Grow that Acts-on the reagent bacteria for an amount of time specified as overnight .",
"This Action Graph is then connected to other such graphs (like in Step 5 ) through temporal and causal relationships (e.g., Grow action's product is host culture thus we use a Product link to establish a temporal relation between Step 1 and Step 5 ).",
"To automate the generation of MSTGs, we must overcome two distinct challenges prevalent in WLPs.",
"First, the result of a preceding step may not be immediately used by the next step, resulting in long-range dependencies.",
"Second, an action may involve implicit information , which is either mentioned earlier or omitted entirely.",
"Current models usually fail to make accurate predictions for long-range relations, as seen in Figure 1 when establishing a temporal relation between Step 1 and Step 5 .",
"These methods rely on relation propagation (DyGIE++ Wadden et al. (2019)) or use contextual embeddings (spERT Eberts and Ulges (2019)).",
"Furthermore, neither successfully establish complex relations involving implicit arguments.",
"In Step 5 , the host culture and viral concentrate must be added to the tube containing soft agar that was removed in Step 4 .",
"However, the location tube in Step 5 is implicit and has to be correctly inferred to make the Site relation between Remove and Add .",
"We propose a novel and effective neural network model that:",
"(i): uses a series of relational convolutions to learn from relations within and across multiple action phrases and",
"(ii): iteratively enriches entity representations with learned latent structures using a multi-head R-GCN model.",
"Our model achieves an F1 score of 54 .",
"5% for temporal and causal relations, significantly improving upon previous methods DyGIE++ and spERT for such long-range relations by 26 .",
"4% and 26 .",
"7% respectively.",
"We analyze our model for intraand inter-sentence relation extraction and show substantial improvements.",
"Further, we also show the model's ability in resolving implicit arguments to improve temporal relation extraction over the best baseline method by 23 .",
"3% .",
"This paper is organized around two main contributions:",
"(i): the WLP-MSTG Corpus that extends the WLP Corpus (Kulkarni et al., 2018) by including intraand cross-sentence temporal and causal relationships and",
"(ii): a novel model that builds upon latent structures to resolve implicit arguments and long-range relations spanning multiple sentences.",
"In Section 2, we describe related works and in Section 3, we introduce MSTGs highlighting the two challenges.",
"Next, we describe our proposed model in Section 4 and demonstrate its performance in Section 5. 2 Related Work Temporal and Causal Relation Extraction: Prior efforts have shown great promise in learning local and global features (Leeuwenberg and Moens, 2017; Ning et al., 2017).",
"Neural-network-based methods have proven effective (Meng et al., 2017; Meng and Rumshisky, 2018).",
"Notably, Han et al. (2019) use neural support vector machine which can be difficult to train.",
"Early methods for extracting causal relations resorted to feature engineering (Bethard and Martin, 2008; Yang and Mao, 2014).",
"Recently several researchers (Zeng et al., 2014; Nguyen and Grishman, 2015; Santos et al., 2015) used convolutional neural networks (CNNs) for extracting causal features.",
"Notably, Li and Mao (2019) addressed scarcity of training data thorough knowledge-based CNN.",
"However, such methods are not scalable to multiple sentences.",
"Cross Sentence Relation Extraction: Long range relations are understudied in literature.",
"Prior work focused on relations within a sentence or at best between pairs of sentences (Peng et al., 2017; Lee et al., 2018; Song et al., 2018; Guo et al., 2019).",
"In addition to joint entity and relation extraction models, Wadden et al. (2019) proposed a model that passes useful information across graphs over cross-sentence contexts while Eberts and Ulges (2019) encoded per sentence contextual information for relation extraction over longer sentences.",
"Implicit Arguments: Early methods selected specific features to build linear classifiers (Ger-ber and Chai, 2010, 2012).",
"Others incorporated additional, manually-constructed resources like named entity taggers and WordNet (Gerber and Chai, 2012; Laparra and Rigau, 2013; Fellbaum, 2012).",
"In contrast, a few notable studies used unlabeled training data to resolve implicit arguments (Chiarcos and Schenk, 2015; Schenk et al., 2016).",
"Finally, Do et al. (2017) explored the full probability space of semantic arguments; however, the method does not scale well.",
"To construct a MSTG from an input protocol, we define the following four concepts.",
"(i) Action Graphs: Introduced by Kulkarni et al. (2018), they are extracted from action phrases as seen in Figure 1. Forming the fundamental unit of a MSTG, Action Graphs are composed of an Action , 17 types of named entities as explicit arguments (e.g, Reagent, Location, etc.), and 13 local semantic relations (e.g., Using, Measure, Acts-on, Data Split #Docs #Entities #iAP #cAP-TaC Train 387 34 , 355 32 , 585 5 , 049 Dev 99 13 , 713 12 , 578 2 , 209 Test 128 16 , 869 15 , 679 2 , 724 Total 615 64 , 937 60 , 842 9 , 982 Table 1: Statistics of the Wet Lab Protocol-Material State Transfer Graph Corpus extended with cross Action Phrase Temporal and Causal relationships. etc.) represented as directed edges, which we shall refer to as inter-Action Phrase (iAP) relations hereafter.",
"(ii) Temporal Relations: Inspired from prior work (Allen, 1984), we define temporality as a relationship between two action phrases such that an action's product (output) is connected to another action's source (input), thereby imposing a partial or total order.",
"It is also necessary to determine whether an action is executed before or simultaneously with respect to other actions.",
"We use 5 temporal relations, (namely Acts-on, Site, Coref-erence, Product, and Overlaps) to capture the flow of materials.",
"(iii) Causal Relations: Following (Barbey and Wolff, 2007), we define causality as the relationship between two actions where one action directly affects the execution of another action (e.g., if a given action enables or prevents 2 another action).",
"(iv) Implicit Arguments: We characterize implicit arguments into four cases (Figure 2a) depending on whether the source or product of the connected actions is implicit or explicit.",
"Four of the five temporal relations in WLP-MSTG are defined to handle implicit arguments: Acts-on, Site, Coreference, and Product.",
"Annotation Process: We annotate six-hundred-and-fifteen ( 615 ) protocols derived from the WLP Corpus to include the 6 global cross-Action Phrase Temporal and Causal (cAP-TaC) relationships.",
"We split the annotation task into two phases.",
"In the first phase, we worked with 7 expert annotators to develop the guidelines over 8 iterations.",
"Each iteration consisted of 10 protocols that were individually annotated by each expert annotator, and the inter-annotator agreement (IAA) was measured for each of the 10 protocols.",
"At the end of each iter-2 Due to the limited instances of Prevents relations found in WLPs, we replace these with the relation Enables.",
"E.g., Mix regents carefully to not spill contents , implies a Prevents relation from Mix to spill which is equivalent to an Enables relation from Mix to not spill .",
"ation, we refined the set of rules to reduce the guide-lines' ambiguity.",
"The agreement measured across all annotators using Krippendorff's Alpha (Krip-pendorff, 2004) on the last iteration was 78 .",
"23% .",
"With a good IAA attained, we began the second phase to collect the train, dev, and test datasets.",
"To ensure the highest quality of the test data, we employed all 7 annotators to work on the same 128 protocols and merged the resulting annotations based on majority voting.",
"In contrast, individual annotators collected the train and dev sets separately to speed up the annotation process.",
"A typical protocol of 30 steps required 25 minutes on average for an annotator to identify all the cAP-TaC relations.",
"Comparison with previous corpora: Our corpus, WLP-MSTG, extends the WLP corpus (Kulka-rni et al., 2018) which was later updated for a WNUT 2020 shared task (Tabassum et al., 2020).",
"WNUT 2020 was primarily designed to facilitate supervised named entity taggers and within-sentence relation extraction methods.",
"We extend the 615 protocols therein to include intraand inter-sentence temporal and causal relations.",
"To ensure a fully connected graph, we exclude entities and relations annotated for spurious descriptive sentences that do not prescribe any actions (e.g., title, notations, definitions, etc.).",
"Table 2 provides a comparison of statistics among the three corpora.",
"Analysis: We conducted a distribution analysis of 90 protocols that would typically serve as the dev set for machine learning models.",
"Actions connected by temporal and causal relations tend to be consecutive ( 78 . 4% ); however, a non-trivial number are considerably spaced apart ( 21 . 6% ) with 1 .",
"08% of the total at least 8 actions apart.",
"For implicit arguments, we observed:",
"(i) implicit arguments are unusually prevalent in WLPs ( 88 . 44% ),",
"(ii) a higher percentage ( 55 . 98% ) of the products of an action are implied, and",
"(iii) temporally connected actions are closer if they contain implicit arguments; otherwise, they are relatively farther apart Figure 2b.",
"This analysis provides valuable insight about the challenges in the form of long-range relations and implicit arguments that are present in extracting MSTGs from WLPs.",
"We develop a latent structure model for jointly learning entity and relations within and across multiple sentences.",
"A schematic of the model is shown in Figure 3. In Section 4.1 we describe construction of span representation (Figure 3A) from protocol text that incorporates critical features necessary for long-range relation extraction.",
"Section 4.2 explains how the transcoder block (Figure 3B) builds upon latent structures (as illustrated in Figure 3D) to improve entity and relation representations.",
"Finally, in Section 4.3 we discuss training and regularization strategies to jointly learn span, entity, and relations through a multi-task loss function derived from span, entity, and relation scores (Figure 3C).",
"We shall use Figure 1 as a running example throughout the model description.",
"Following prior span-based approaches (Wadden et al., 2019; Eberts and Ulges, 2019), our goal is to",
"(i): collect a series of tokens from the protocol text,",
"(ii): enumerate all spans, and",
"(iii): rank top-scoring spans for considerations as candidates for entity and relation extraction.",
"Token embeddings: We use SciBERT (Beltagy et al., 2019) for learning token representations for a given protocol P .",
"As shown in Figure 3, the input is a protocol P represented as a collection of sentences S = { s 1 , ..., s P } .",
"Each sentence s i is composed of a sequence of tokens { t 1 , ..., t n } .",
"For example, within the sentence, Add 1.0 mL host culture and either 1.0 or 0.1 mL viral concentrate (Figure 1, Step 5), we identify host , culture , and etc., as the tokens to be passed to the SciBERT model.",
"We batch process sentences in the protocol to generate context-aware embeddings { t 1 , ..., t n } for each sentence.",
"Span Enumeration: The spans between two tokens t i and t j is represented as s ij = { t i , t i +1 , ...t j } .",
"We enumerate all possible spans of upto a size of 10 tokens.",
"For each enumerated span, the span representation e ij R d e is derived from RelationEncoder Add + Norm RelationDecoder Add + Norm Add + Norm 2-Layer Convolutions LaplacianSmoothing SciBERT (fine-tuned) 1. Grow the bacteria overnight",
"where, t i and t j are the first and last token representation.",
"Note, sh ( s ij ) is a soft head representation (Bahdanau et al., 2014) and, w ( s ij ) is a learnt span width embedding respectively.",
"Further, pos ( s ij ) and step ( s ij ) are two positional embeddings, the former for within sentence while the latter defines the step position within the protocol respectively.",
"Hence, host culture and host culture and are two valid spans that are enumerated through this process.",
"Span Pruning: Next, low scoring spans are filtered out during both training and evaluation phases.",
"Following (Lee et al., 2017), the scoring function is implemented as a feed-forward network s ( e ij ) = w Ts FFNN s ( e ij ) .",
"We rank and pick a number of top scoring spans per sentence by using a combination of",
"(i): a maximum fraction p = 0 .",
"1 of spans per sentence, and",
"(ii): a minimum score threshold t = 0 .",
"5 .",
"Thus, the span host culture receives a significantly higher score than host culture and , indicating that the former is the correct reagent entity in the prescribed step.",
"These span candidates are then passed to the transcoder block.",
"In the transcoder block, we propose a novel architecture to improve relation and entity representation from latent structures.",
"The objective is two fold:",
"(i): to leverage localized features at phrase and sentence levels to resolve long range relations through a relation convolutions , and",
"(ii): to learn from latent structures how to resolve implicit arguments through a multi-head relational graph convolution network (multi-head R-GCN).",
"Each transcoder block is composed of a Relation Encoder (Section 4.2.1), Convolution (Sec-tion 4.2.2) and Decoder (Section 4.2.3) components, to discover local relationships between the input entities.",
"These relations (represented as latent structures A R m m r ) are then passed to the Multi-Head R-GCN (Section 4.2.4) component of the same transcoder block to enrich the entity representation with information about those discovered local relationships.",
"These enriched entities can now be used to predict more complex cross sentence relationships in the next transcoder block.",
"To facilitate deeper networks, we make use of residual connections (He et al., 2016) followed by layer normalization (Ba et al., 2016) as denoted by Add + Norm in Figure 3B.",
"We shall make use of the example (Figure 1), focusing on the long range relationships between Step 1 (i.e., Grow the bacteria overnight. ) and Step 5 (i.e., Add 1.0 mL host culture and either 1.0 or 0.1 mL viral concentrate. ) to illustrate the flow of information throughout the transcoder block.",
"The first transcoder block takes as input m high scoring candidate entity span representations (as E (0) R m d e ) as determined by the pruner 3 .",
"For instance, from Step 1 we identify the following high scoring candidate entities grow , bacteria , and overnight and from Step 5 we find add , 1.0 mL , host culture , 0.1 mL , and viral concentrate .",
"Following (Nguyen and Verspoor, 2019), we make use of a bi-affine pairwise function to encode relations for every pair of entity span representation.",
"That is, we generate relational embeddings for entity pairs like grow and bacteria , grow and overnight , etc.",
"Each entity span e ij R d e is first projected using two FFNNs to generate the representations e hij R d h and e tij R d t indicating the first (head) and the second (tail) argument of a relation: e hij = FFNN h ( e ij ); e tij = FFNN t ( e ij ) In practice, we batch process all entities to generate E h R m d h and E t R m d t where m is the number of candidate spans.",
"In our experiments, we let d h = d t then use a bi-affine operator to calculate a tensor (cid:101) R ( l ) R m d r m for relational embeddings: (cid:101) R ( l ) = ( E h L ) E Tt .",
"Here L R d h d r d t is a learned parameter tensor and d r is the relation embedding size.",
"We enrich the relational embeddings (cid:101) R ( l ) with local relational features within a single phrase (found near the diagonal) and across multiple phrases (found in the upper and lower triangle) using a stack of convolutional layers.",
"We denote C w ( . ) to be a 2 D convolutional operator applying a kernel width of size w w .",
"In our model, we make use of a two-layer convolution: T (0) = ReLU ( C 3 ( (cid:101) R ( l ) )) R ( l ) = ReLU ( C 3 ( T (0) )) The input (cid:101) R ( l ) is reshaped as R m m d r such that the dimensions d r acts as the channel dimension in the convolutions.",
"The dimensions of T (0) is in R m m 2 d r with the final output R ( l ) R m m d r .",
"3 The entity span representation from the entire sub-protocol, (i.e., from steps 1 to 5), are passed as a bag of entities E (0) R m d e .",
"However, there aren't any relations (i.e., R (0) ) to be passed to the first transcoder block 4.2.3 Relation Decoder: The relational embeddings R ( l ) are decoded using a 2-layer FFNN.",
"The decoded scores A R m m r captures the latent structures (as shown in Figure 3B).",
"This is re-encoded using the multi-head R-GCN to strengthen the model's ability to predict more complex relations in the next transcoder layer.",
"For each predicted relation score A r R m m , we add self loops and perform Laplacian smoothing (Kipf and Welling, 2017; Li et al., 2018) for normalization following: A r = (cid:101) D 12 (cid:101) A r (cid:101) D 12 where (cid:101) A r = A r + I and D = (cid:80) j (cid:101) A ijr .",
"Then, using A r as an adjacency matrix, we learn multi-head, direction-specific graph convolution transformations.",
"Each head corresponding to a given relation r performs graph convolutions on the entity representation E ( l 1) R m d e to generate E ( l ) r R m ( d r /r ) .",
"A single R-GCN ( i ) r ( . ) (Schlichtkrull et al., 2018) operation for a given relation type r and i th GCN layer corresponds to: R-GCN ( i ) r ( A r , E ( i 1) r ) = ( A r E ( i 1) r W ( i ) fr ) + ( A Tr E ( i 1) r W ( i ) br ) + b ( i ) r (2) where W ( i ) fr R d i 1 d i , W ( i ) br R d i 1 d i are learnable parameters for incoming and outgoing edge directions respectively and b ( i ) r is the bias.",
"We use the ReLU activation function in our networks.",
"As shown in Figure 3B, the outputs of the individual R-GCN heads are concatenated and passed through a FFNN layer to compute the final output E ( l ) .",
"For instance, suppose we discovered a local relation in Step 1 between grow and bacteria after the Relation Decoder component in the first transcoder block.",
"The Multi-head R-GCN takes in the discovered relation (through the latent structure A ) and enriches grow 's entity embeddings, enabling the next transcoder layer to predict a more complex cross sentence relation between grow (Step 1) and host culture (Step 5).",
"Since bacteria and host culture are semantically related, they have similar entity embeddings, and therefore the enriched representation of grow (now containing information about bacteria ) allows for establishing the relation between grow and host culture in the next transcoder block.",
"The loss function is a linear combination of cross entropy losses for each of the tasks.",
"We additionally apply label smoothing (Szegedy et al., 2016).",
"The relation extraction is trained on gold entity spans.",
"For regularization, we apply dropout (Sri-vastava et al., 2014) to the output of each FFNN layer.",
"We make use of dropedge (Rong et al., 2019) for the adjacency matrix A r before it is passed to the multi-head R-GCN model.",
"In contrast to general language models, domain-specific methods have resulted in more competitive baselines and are better suited (Tabassum et al., 2020; Wadden et al., 2019; Eberts and Ulges, 2019) for simultaneously resolving and predicting entities and relations over longer contexts.",
"Thus, we evaluate our model against two state-of-the-art models for jointly predicting entities and relations in scientific-text domain, namely DyGIE++ (Wadden et al., 2019) and spERT (Eberts and Ulges, 2019), on the WLP-MSTG.",
"We conduct five ( 5 ) runs with random initializations for each evaluation and report the test set performance on the model that achieved the me-dian relation F1 score on the dev(elopment) set.",
"All models are evaluated end-to-end , where the model takes as input tokenized sentences and predicts all the entities and the relations generating a MSTG.",
"We use the standard precision, recall and F1 metrics.",
"An entity is considered correct if its predicted span and label match the ground truth.",
"Relation extraction is performed on the predicted entity spans.",
"A relation is correct if its relation type and the entity pairs are both correct (in span and type) against the ground truth.",
"We also evaluate our model's performance on WNUT 2020 (Tabas-sum et al., 2020) corpus.",
"To fairly evaluate relation extraction, we use gold entities to make relation predictions 4 by modifying the loss function to only train on relation scores.",
"We additionally concatenate entity label embeddings to the span representation in Equation (1).",
"On the WLP-MSTG corpus, Table 3 shows our best model with N = 8 transcoder block layers making",
"4 The best models on WNUT2020 make direct use of gold entities during the training and inference and only focus on relation extraction task.",
"modest improvement on entity extraction at 82 .",
"0% but improving significantly upon the previous state-of-the-art methods (i.e. DyGIE++ and spERT) in predicting relations.",
"Our model outperforms the baselines for relation extraction with an F1 score on predicting inter-Action Phrase (iAP) relations at 68 .",
"0% and cross-Action Phrase Temporal and Causal (cAP-TaC) relations at 54 .",
"5% .",
"We further enhanced the performance of our model by sharing the relational decoders' parameters across all layers of the transcoder block (Section 4.2.3).",
"This enables the latent structures to be grounded in output relation types, which also lends itself to be interpretable.",
"The shared relation decoder marginally outperforms the not-shared configuration by 0 .",
"5% for iAP relations and 1 .",
"1% for cAP-TaC relations.",
"Short and Long Range Relations: On the WNUT 2020 corpus, which only includes intra-sentence relations, Table 4 shows that our model outperforms the best single model that used the original data by 1 .",
"0% .",
"We also report that our model is competitive against the ensemble approach that included models trained on an altered version of the original corpus where they removed duplicate text after clustering.",
"On the WLP-MSTG corpus, we can evaluate both short and long range relations: from Table 3 we see a 3 .",
"5% improvement in F1 score over DyGIE++ for iAP relations.",
"This shows that our model leverages the cross-sentence temporal and causal relations that were additionally annotated in WLP-MSTG to improve local iAP relations.",
"Our model outperforms DyGIE++ and spERT on intra-sentence by 4 .",
"3% and 26 .",
"1% respectively, and significantly improves for inter-sentence cAP-TaC relations by 45 .",
"5% and 21 .",
"5% respectively.",
"This is attributed to positional embeddings along with the relational convolutions which enables the model to learn intra and inter action phrase relations effectively.",
"We see spERT performing better for Overlaps which is largely attributed to the 'CLS' token that spERT embeds to make relation predictions.",
"Figure 4 shows performance on varying the number of sentences in between entities involved in a relation.",
"We observe our model performing the best for all distances between sentences.",
"This is once again attributed to the relational convolution component which is effective in capturing far away relations.",
"temporal relations at 53 .",
"4% F1 score.",
"We also observe significant improvements across the board for resolving implicit arguments.",
"We see the highest gains (at 55 . 6% ) compared to the baseline models ( 1 . 6% for DyGIE++ and 10 . 2% for spERT) for (E-I) case (Figure 2a) which only contains 169 samples in the test set.",
"Our model is able to correctly resolve the implicit source (input) to an action by utilizing simple relations that is typically connected to explicit arguments.",
"Causal relations: The performance for causal relations for our model against DyGIE++ is comparable as seen in Table 6. Causal relations are relatively easier for the baseline models to capture, as they tend to have specific prepositions in be-cAP-TaC Relations DyGIE++ spERT Ours Acts-on 62 .",
"tween action phrases.",
"5 However, more complex causal relations are hard.",
"Still, our model is able to deal with such examples, presenting about 0 .",
"7% performance gain compared to DyGIE++ and about 10 .",
"9% improvement against spERT.",
"This is primarily attributed to the multi-head R-GCN which builds upon simple relations that provide clues to establish harder causal relations.",
"Cross-sentential ' Enables ' relations (as seen in Table 5) are challenging even for our model as once again we do not encode any contextual features.",
"Model Ablation: Table 7 presents the results of the ablation test of our model on the development set of WLP-MSTG.",
"All three components (i.e., positional embeddings, relation convolutions and 5 For instance, in Step Resuspend by vortexing the pellets baseline models can easily identify an Enables relation from vortexing to Resuspend with the help of the preposition ' by '.",
"multi-head R-GCN) play a significant role in improving cAP-TaC performance.",
"Relation convolutions contributes the most to iAP and cAP-TaC relations by about 1 .",
"2% and 2 .",
"4% respectively.",
"Positional embeddings impacts iAP relations more (by 1 . 1% ) whereas Multi-Head R-GCN only impacts the more complex relations (cAPTaC by 1 . 1% ) and does not help in improving simpler relations.",
"How Many",
"Layers?: Figure 5 shows that more layers generally improve far away relations without improving closer ones.",
"This shows that although our model can build upon simple relations that are typically close by, it cannot do the opposite,",
"i.e., Model All iAP cAP Final Model 59.5 64.3 47.5 Pos + Step Embedding 58 .",
"leverage far away relations (which are typically more complicated) to improve more challenging closer relations.",
"Our model discovers those complex, distant relations too deep into the network to be utilized to predict the challenging local relations.",
"We present the WLP-MSTG corpus, an extension of the WLP corpus that includes cAP-TaC relationships for building MSTGs.",
"This corpus highlights two unique challenges:",
"(i) the implicit argument problem and",
"(ii) long-range relations.",
"To address these issues, our model builds upon latent structures thus outperforming previous state-of-the-art models for predicting iAP and cAP-TaC relations.",
"We also report significant improvements in understanding implicit arguments and identifying long range relationships across multiple sentences.",
"However, our model's lower absolute performance indicates that we have not fully captured the information needed to facilitate modeling end-to-end workflows, which will have a lasting impact in improving automation in the life sciences and other domains."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result"
] |
[
"Text segmentation aims to uncover latent structure by dividing text from a document into coherent sections.",
"Where previous work on text segmentation considers the tasks of document segmentation and segment labeling separately, we show that the tasks contain complementary information and are best addressed jointly.",
"We introduce the Segment Pooling LSTM (S-LSTM) model, which is capable of jointly segmenting a document and labeling segments.",
"In support of joint training, we develop a method for teaching the model to recover from errors by aligning the predicted and ground truth segments.",
"We show that S-LSTM reduces segmentation error by 30% on average, while also improving segment labeling.",
"A well-written document is rich not only in content but also in structure.",
"One type of structure is the grouping of content into topically coherent segments.",
"These segmented documents have many uses across various domains and downstream tasks.",
"Segmentation can, for example, be used to convert unstructured medical dictations into clinical reports (Sadoughi et al., 2018), which in turn can help with medical coding (since a diagnosis mentioned in a \"Medical History\" might be different from a diagnosis mentioned in an \"Intake\" section (Ganesan and Subotin, 2014)).",
"Segmentation can also be used downstream for retrieval (Hearst and Plaunt, 2002; Edinger et al., 2017; Allan et al., 1998), where it can be particularly useful when applied to informal text or speech that lacks explicit segment markup.",
"Topically segmented documents are also useful for pre-reading (the process of skimming or surveying a text prior to careful reading), thus serving as an aid for reading comprehension (Swaffar et al., 1991; Ajideh, 2003).",
"Uncovering latent, topically coherent segments of text is a difficult problem because it requires solving a chicken-and-egg problem: determining the segment topics is easier if segment boundaries are given, and identifying the boundaries of segments is easier if the topic(s) addressed in parts of the document are known.",
"Prior approaches to text segmentation can largely be split into two categories that break the cycle by sequentially solving the two problems: those that attempt to directly predict segment bounds (Koshorek et al., 2018), and those that attempt to predict topics per passage (e.g., per sentence) and use measures of coherence for post hoc segmentation (Hearst, 1997; Arnold et al.; Eisenstein and Barzilay, 2008; Riedl and Biemann, 2012; Glava et al., 2016).",
"The benefit of the topic modeling approach is that it can work in unsupervised settings where collecting ground truth segmentations is difficult and labeled data is scarce (Eisenstein and Barzilay, 2008; Choi, 2000).",
"Recent work uses Wikipedia as a source of segmentation labels by eliding the segment bounds of a Wikipedia article to train supervised models (Koshorek et al., 2018; Arnold et al.).",
"This enables models to directly learn to predict segment bounds or to learn sentence-level topics and perform post hoc segmentation.",
"Our work is motivated by the observation that the segment bounds and topicality are tightly interwoven, and should ideally be considered jointly rather than sequentially.",
"We start by examining three properties about text segmentation: (1) segment bounds and segment labels contain complementary supervisory signals, (2) segment labels are a product of lower level (e.g. sentence) labels which must be composed, and (3) the model should not only learn to label from ground-truth segmentations at training time, but instead the labeler should learn to be robust to segmentation errors.",
"These properties build on previous work discussed in Section 2. We experimentally evaluate and verify each of these properties in Section 5 with respect to a document segmentation and segment labeling task.",
"Taking advantage of these properties, we propose a neural model that jointly segments and labels without committing to a priori segmentations, Segment Pooling LSTM (S-LSTM).",
"It consists of three components: a segment proposal LSTM (dis-cussed in Section 3.2), a segment pooling layer (Section 3.3), and a segment aligner for training and evaluation (Section 3.4).",
"Our main contribution is a model that performs segmentation and labeling jointly rather than separately.",
"By virtue of joint inference, our model takes advantage of the complementary supervisory signals for segmentation and topic inference, considers the contribution of all sentences to the segment label, and avoids committing to early errors in low-level inference.",
"Our approach improves over neural and nonneural baselines of a document segmentation task.",
"We use a dataset of Wikipedia articles described in Section 5 for training and evaluation.",
"We show that S-LSTM is capable of reducing segmentation error by, on average, 30% while also improving segment classification.",
"We also show that these improvements hold on out-of-domain datasets.",
"Coherence-based Segmentation.",
"Much work on text segmentation uses measures of coherence to find topic shifts in documents.",
"Hearst (1997) introduced the TextTiling algorithm, which uses term co-occurrences to find coherent segments in a document.",
"Eisenstein and Barzilay (2008) introduced BayesSeg, a Bayesian method that can incorporate other features such as cue phrases.",
"Riedl and Biemann (2012) later introduced TopicTiling, which uses coherence shifts in topic vectors to find segment bounds.",
"Glava et al. (2016) proposed GraphSeg, which constructs a semantic relatedness graph over the document using lexical features and word embeddings, and segments using cliques.",
"Nguyen et al. (2012) proposed SITS, a model for topic segmentation in dialogues that incorporates a per-speaker likelihood to change topics.",
"While the above models are unsupservised, Arnold et al. introduced a supervised method to compute sentence-level topic vectors using Wikipedia articles.",
"The authors created the WikiSection dataset and proposed the SECTOR neural model.",
"The SECTOR model predicts a label for each sentence, and then performs post hoc segmentation looking at the coherence of the latent sentence representations, addressing segmentation and labeling separately.",
"We propose a model capable of jointly learning segmentation boundaries and segment-level labels at training time.",
"Our segmentation does not rely on measures of coherence, and can instead learn from signals in the data, such as cue phrases, to predict segment bounds, while still performing well at the segment labeling task.",
"Supervised Segmentation.",
"An alternative to using measures of topical coherence to segment text is to learn to directly predict segment bounds from labeled data.",
"This was the approach taken in Koshorek et al. (2018), where the authors used Wikipedia as a source of training data to learn text segmentation as a supervised task.",
"However, learning only to predict segment bounds does not necessarily capture the topicality of a segment that is useful for informative labeling.",
"The task of document segmentation and labeling is well-studied in the clinical domain, where both segmenting and learning segment labels are important tasks.",
"Pomares-Quimbaya et al. (2019) provide a current overview of work on clinical segmentation.",
"Ganesan and Subotin (2014) trained a logistic regression model on a clinical segmentation task, though they did not consider the task of segment labeling.",
"Tepper et al. (2012) considered both tasks of segmentation and segment labeling, and proposed a two-step pipelined method that first segments and then classifies the segments.",
"Our proposed model is trained jointly on both the segmentation and segment labeling tasks.",
"Concurrent work considers the task of document outline generation (Zhang et al., 2019).",
"The goal of outline generation is to segment and generate (po-tentially hierarchical) headings for each segment.",
"The authors propose the HiStGen model, a hierarchical LSTM model with a sequence decoder.",
"The work offers an alternative view of the joint segmentation and labeling problem, and is evaluated using exact match for segmentation and ROUGE (Lin, 2004) for heading generation if the segment is predicted correctly.",
"In contrast, we evaluate our models using a commonly-used probabilistic segmentation measure, Pk, which assigns partial credit to incorrect segmentations (Beeferman et al., 1999).",
"We also use an alignment technique to assign partial credit to labels of incorrect segmentations, both for training and evaluation.",
"IOB Tagging.",
"The problem of jointly learning to segment and classify is well-studied in NLP, though largely at a lower level, with Inside-Outside-Beginning (IOB) tagging (Ramshaw and Marcus, 1999).",
"Conditional random field (CRF) decoding has long been used with IOB tagging to simultaneously segment and label text, e.g. for named entity recognition (NER, McCallum and Li, 2003).",
"The models that perform best at joint segmenta-tion/classification tasks like NER or phrase chunking were IOB tagging models, typically LSTMs with a CRF decoder (Lample et al., 2016) until BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018).",
"Tepper et al. (2012) proposed the use of IOB tagging to segment and label clinical documents, but argued for a pipelined approach.",
"CRF-decoded IOB tagging models are more difficult to apply to the multilabel case.",
"Segment bounds need to be consistent across all labels, so modeling the full transition from | L | | L | (where | L | is the size of the label space) at every time step is computationally expensive.",
"In contrast, our joint model performs well at multilabel prediction, while also outperforming a neural CRF-decoded model on a single -label labeling task.",
"In order to jointly model document segmentation and segment classification, we introduce the Segment Pooling LSTM (S-LSTM) model.",
"S-LSTM is a supervised model trained to both predict segment bounds and pool over and classify the segments.",
"The model consists of three components: a sentence encoder (Section 3.1), a segment predictor LSTM (Section 3.2), and a segment pooling network which pools over predicted segments to classify them (Section 3.3).",
"The segment predictor is allowed to make mistakes that the labeler must learn to be robust to, a process which we refer to as exploration, and accomplish by aligning predicted and ground truth segments (Section 3.4).",
"The full architecture is presented in Figure 1, and the loss is discussed in Section 3.5.",
"The first stage is encoding sentences.",
"S-LSTM is agnostic to the choice of sentence encoder, though in this work we use a concat pooled bi-directional LSTM (Howard and Ruder, 2018).",
"First, the embedded words are passed through the LSTM encoder.",
"Then, the maximum and mean of all hidden states are concatenated with the final hidden states, and this is used as the sentence encoding.",
"The second step of our model is a Segment Predictor LSTM, which predicts segment boundaries within the document.",
"For this step we use a bidirectional LSTM that consumes each sentence vector and predicts an indicator variable, (B)eginning or (I)nside a segment.",
"It is trained from pre-segmented documents using a binary cross entropy loss.",
"This indicator variable determines if the sentence is the start of a new segment or not.",
"This is similar to the approach taken by TextSeg in Koshorek et al. (2018), though we do not estimate a threshold, , and instead learn to to predict two classes: (B)eginning and (I)nside.",
"After segmenting the document, the third stage of the model pools within the predicted segments to predict a label for each segment.",
"The sentence vectors for the predicted segments are all grouped, and a pooling function is run over them.",
"There are several possible sequence-to-vector pooling functions that could be used, such as averaging, and more complex learned pooling functions, such as LSTMs.",
"The full S-LSTM model uses a concat pooling LSTM, and our experimental results show that this yields a better segment label than just averaging.",
"We then use a classifier following the output of the segment pooler, which can provide a distribution over labels for each segment.",
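As a concrete illustration of this stage, here is a minimal PyTorch sketch of a concat-pooling LSTM segment pooler; the class name and dimensions are our own assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class SegmentPooler(nn.Module):
    """Concat-pooling LSTM: runs an LSTM over the sentence vectors of one
    predicted segment and concatenates [last state; max; mean] before
    classifying the segment."""
    def __init__(self, sent_dim: int = 200, hidden: int = 200, n_labels: int = 27):
        super().__init__()
        self.lstm = nn.LSTM(sent_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(3 * hidden, n_labels)

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (1, seg_len, sent_dim) -- sentence vectors of one segment
        states, (h_n, _) = self.lstm(segment)
        pooled = torch.cat(
            [h_n[-1], states.max(dim=1).values, states.mean(dim=1)], dim=-1
        )
        return self.classifier(pooled)  # (1, n_labels) label logits
```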
"The combination of segment prediction and pooling is one way that S-LSTM is different from previous hierarchical LSTM models.",
"The model can predict and label segments dynamically, generating a single vector for predicted segments.",
"Because segments can be considered dynamically at training time, we propose a method of assigning labels to potentially incorrect segments by aligning the predicted segments with ground truth segments.",
"This label assignment allows segment-labeling loss to be propagated through the end-to-end model.",
"Teacher Forcing. The technique of training on ground truth inputs, as opposed to model predictions, was first developed in Williams and Zipser (1989).",
"The idea is, during the first stages of training, to substitute ground truth values for inputs that would normally come from model predictions, in order to help with convergence.",
"For S-LSTM, teacher forcing is the simplest approach to segment pooling and alignment: at training time, feed the ground truth segments (as opposed to the predicted segments) to the segment pooler (step 3 in Figure 1).",
"This gives us a one-to-one alignment of \"predicted\" (forced) segments and ground truth segments.",
"This is opposed to only using the predicted segments as the bounds for the segment pooler.",
"Exploration.",
"Employing only teacher forcing does not allow the segment labeler to learn how to recover from errors in segmentation.",
"The mechanism for allowing the model to explore incorrect segmentations is to align the predicted segments with overlapping ground truth segments at training time, and treat all aligned ground truth labels as correct.",
"While many alignments are possible, we use the one presented in Figure 2. This many-to-many alignment ensures that every ground-truth segment is mapped to at least one predicted segment and every predicted segment is mapped to at least one ground truth segment.",
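A plain-Python sketch of this many-to-many alignment, under the assumption that segments are represented as (start, end) sentence spans, could look as follows (the function name and span convention are ours):

```python
def align_segments(pred, gold):
    """Many-to-many alignment of segments given as (start, end) sentence
    spans (end exclusive). Returns a set of (pred_idx, gold_idx) pairs such
    that every predicted and every gold segment appears at least once."""
    def overlap(a, b):
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    pairs = set()
    for gi, g in enumerate(gold):   # each gold segment -> best predicted
        pi = max(range(len(pred)), key=lambda i: overlap(pred[i], g))
        pairs.add((pi, gi))
    for pi, p in enumerate(pred):   # each predicted segment -> best gold
        gi = max(range(len(gold)), key=lambda i: overlap(p, gold[i]))
        pairs.add((pi, gi))
    return pairs

# e.g. align_segments([(0, 4), (4, 9)], [(0, 3), (3, 6), (6, 9)])
# -> {(0, 0), (1, 1), (1, 2)}
```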
"We can additionally schedule teacher forcing.",
"At the beginning, when the segmentation prediction network performs poorly, the model pools over only ground truth segment bounds, allowing it to learn the cleanest topic representations.",
"However, as training progresses and the segmentation accuracy begins to converge, we switch from pooling over ground truth segments to aligning predicted and ground truth segments.",
"In this way, the segment pooler learns to be robust to segmentation errors.",
"To jointly train the model, we use a multi-task loss that combines the segment prediction loss and the segment classification loss,",
"where y_seg are the labels for the segment prediction LSTM and y_cls are the segment labels.",
"In addition, we pass in an aligner , which determines how to align the predicted segments with the ground truth segments to compute the loss, and either teacher forces the model or allows it to explore.",
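A plausible form of this multi-task loss, written in our own notation (BCE for the binary boundary term, CE for the segment-label term, and align(·) for the aligner just described), is:

```latex
\mathcal{L} \;=\;
\underbrace{\mathrm{BCE}\big(\hat{y}_{\mathrm{seg}},\, y_{\mathrm{seg}}\big)}_{\text{segment prediction}}
\;+\;
\underbrace{\mathrm{CE}\big(\hat{y}_{\mathrm{cls}},\, \mathrm{align}(y_{\mathrm{cls}})\big)}_{\text{segment labeling}}
```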
"We follow the experimental procedure of Arnold et al. to evaluate S-LSTM for the tasks of document segmentation and segment labeling.",
"WikiSection .",
"Arnold et al. introduced the WikiSection dataset, which contains Wikipedia articles across two languages (English and German) and domains (Cities and Diseases).",
"Articles are segmented using the Wikipedia section structure.",
"The heading of each segment is retained, as well as a normalized label for each heading type (e.g. History, Demography), drawn from a restricted label vocabulary.",
"There are two tasks: (1) jointly segment the document and assign a single restricted-vocabulary label to the segment, and (2) predict the bag-of-words in the title of the Wikipedia section as a label.",
"For instance, the bag-of-words label for the title of this section would be the words: [Dataset, Experimental, Setup].",
"For the second task, we post-process headers to remove stopwords, numbers and punctuation.",
"We then remove words that occur fewer than 20 times in the training data to get the final label vocabulary sizes.",
"Of note, we encountered a smaller label vocabulary for the bag-of-words generation task than that reported by Arnold et al.",
"For the four datasets, the originally reported sizes of the header vocabularies were: [1.5k, 1.0k, 2.8k, 1.1k].",
"When reproducing earlier results, we verified with the dataset authors that the actual sizes were: [179, 115, 603, 318].",
"The first task aligns closely with the clinical domain, in which headers are typically drawn from a fixed label set (Tepper et al., 2012).",
"The second aligns more closely with learning to segment and label from naturally labeled data, such as contracts or Wikipedia articles, which can potentially then be transferred (Koshorek et al., 2018).",
"Wiki-50.",
"The Wiki-50 dataset was introduced as a test set in Koshorek et al. (2018), which also introduced the full Wiki-727k dataset.",
"The dataset contains 50 randomly sampled Wikipedia articles, segmented and with their headers, and was used to evaluate computationally expensive methods such as BAYESSEG (Eisenstein and Barzilay, 2008).",
"Cities and Elements. The Cities and Elements datasets were introduced in Chen et al. (2009).",
"They provide two additional Wikipedia datasets with both segmentation and segment headers.",
"Clinical.",
"We use the Clinical Textbook dataset from Eisenstein and Barzilay (2008), which has segment boundaries but no headings.",
"We evaluate S-LSTM against previous document segmentation and segment labeling approaches on all four WikiSection datasets (English-language Diseases, en_disease; German-language Diseases, de_disease; English-language Cities, en_city; and German-language Cities, de_city) for both the single-label and multi-label tasks.",
"Model Ablation.",
"In order to understand the effect of our proposed segment pooling and segment exploration strategies, we also include results for simpler baselines for each of these modules.",
"For the segment labeling, we report not only the full S-LSTM model with LSTM pooling, but also a mean pooling model, which we denote with \"-pool\".",
"For the segment exploration we report not only the model with exploration, but also a model only trained using teacher forcing, which we denote with \"-expl\".",
"Transfer. We additionally evaluate models trained on the English WikiSection tasks ( en_disease and en_city ) on the Cities, Elements, Wiki-50, and Clinical datasets.",
"Segmentation: Pk.",
"Pk is a probabilistic measure (Beeferman et al., 1999) that works by running a sliding window of width k over the predicted and ground truth segments, and counting the number of times there is disagreement about the two ends of the probe being in the same or different sections (see Figure 3).",
"The number of disagreements is then divided by the total number of window positions, resulting in a score normalized between 0 and 1. Our segmentation results are reported setting k to half the average size of ground truth segments.",
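For concreteness, here is a minimal Python sketch of Pk as described above; boundary handling in published implementations may differ slightly, so treat this as illustrative:

```python
def pk(ref_starts, hyp_starts, n_sents, k=None):
    """Pk segmentation metric (Beeferman et al., 1999): slide a window of
    width k and count how often reference and hypothesis disagree on whether
    the two window ends fall in the same segment. Lower is better."""
    def segment_ids(starts):
        ids, seg, start_set = [], -1, set(starts)
        for i in range(n_sents):
            if i in start_set:
                seg += 1
            ids.append(seg)
        return ids

    ref, hyp = segment_ids(ref_starts), segment_ids(hyp_starts)
    if k is None:  # half the average reference segment size, as in the paper
        k = max(1, round(n_sents / (2 * len(ref_starts))))
    disagreements = sum(
        (ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
        for i in range(n_sents - k)
    )
    return disagreements / (n_sents - k)

# e.g. pk(ref_starts=[0, 5, 9], hyp_starts=[0, 4, 9], n_sents=12)
```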
"Classification: F1, MAP, and Prec@1.",
"For classification, we report three different measures, depending on the task.",
"For the single-label tasks, we report F1 and Mean Average Precision (MAP).",
"For evaluating the bag-of-words (multi-label) tasks, we report Precision at the first rank position (Prec@1) and MAP.",
"In both cases, these are computed by first aligning the predicted segments with the ground truth segments as shown in Figure 2 and described in Section 3.4.",
"In all cases, the metrics are micro-averaged.",
"We report C99 (Choi, 2000), TopicTiling (Riedl and Biemann, 2012), and TextSeg (Koshorek et al., 2018) as baselines on WikiSection segmentation.",
"For a neural baseline, we report the SECTOR model (Arnold et al.) with pre-trained embeddings, denoted in the paper as SEC>T,H+emb.",
"For the additional datasets, we report GraphSeg (Glavaš et al., 2016), BayesSeg (Eisenstein and Barzilay, 2008) and pretrained TextSeg and SECTOR models.",
"In addition, we implemented an LSTM-LSTM-CRF IOB tagging model following Lample et al. (2016).",
"This is only used for the single-label experiments, as CRF-decoded IOB tagging models are more difficult to apply to the multilabel case.",
"For each task and dataset, we use the same set of hyperparameters: Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001 and weight decay 0.9.",
"Dropout (Srivastava et al., 2014) is applied after each layer except the final classification layers; we use a single dropout probability of 0.1 throughout.",
"For models with exploration, we employ teacher forcing for 10 epochs.",
"Model weights are initialized using Xavier normal initialization (Glorot and Bengio, 2010).",
"All LSTM hidden-layer sizes are set to 200.",
"We use fixed 300-dimensional FastText embeddings (Bojanowski et al., 2017) for both English and German, and project them down to 200 dimensions using a trainable linear layer.",
"There are five major takeaways from the experimental results and analysis.",
"First, the jointly trained S-LSTM model shows major improvement over prior work that modeled document segmentation and segment labeling tasks separately.",
"Second, segment alignment and exploration during training reduces error rates.",
"Third, the segment pooling layer leads to improvements for both segmentation and segment labeling.",
"Fourth, S-LSTM outperforms an IOB-tagging CRF-decoded model for single-label segment labeling, and also generalizes easily and tractably to multi-labeling.",
"Table 1: WikiSection-topics single-label classification results (Pk: lower is better; F1 and MAP: higher is better). Columns are grouped by dataset: en_disease (27 topics), de_disease (25 topics), en_city (30 topics), de_city (27 topics).",

| model configuration | Pk | F1 | MAP | Pk | F1 | MAP | Pk | F1 | MAP | Pk | F1 | MAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| C99 | 37.4 | n/a | n/a | 42.7 | n/a | n/a | 36.8 | n/a | n/a | 38.3 | n/a | n/a |
| TopicTiling | 43.4 | n/a | n/a | 45.4 | n/a | n/a | 30.5 | n/a | n/a | 41.3 | n/a | n/a |
| TextSeg | 24.3 | n/a | n/a | 35.7 | n/a | n/a | 19.3 | n/a | n/a | 27.5 | n/a | n/a |
| SEC>T+emb | 26.3 | 55.8 | 69.4 | 27.5 | 48.9 | 65.1 | 15.5 | 71.6 | 81.0 | 16.2 | 71.0 | 81.1 |
| LSTM-LSTM-CRF | 23.9 | 57.2 | n/a | 23.6 | 51.4 | n/a | 9.7 | 77.5 | n/a | 10.2 | 74.0 | n/a |
| S-LSTM | 20.0 | 59.3 | 72.4 | 18.8 | 55.6 | 69.0 | 9.1 | 76.1 | 83.5 | 9.5 | 76.5 | 84.5 |
"Fifth, a deeper analysis of the joint modeling demonstrates that segment labeling and segment bound prediction contain complementary information.",
"Tables 1 and 2 show that by explicitly predicting segment bounds we can improve segmentation by a large margin.",
"On the header prediction task (Table 2), we reduced Pk by an average of over 30% across the WikiSection datasets.",
"Pk was consistent across both WikiSection tasks and, unlike what Arnold et al. reported, did not degrade when going from single-label to multi-label prediction.",
"This shows that we can achieve a more robust segmentation through jointly modeling segmentation and labeling.",
"This is also clear from Figure 4, where S-LSTM predicts a much more accurate segmentation.",
"The results of an ablation experiment (Table 2, bottom) show that there is an additional classification gain from allowing the model to explore and learn to recover from segmentation errors.",
"Exploration has the important property of allowing the model to optimize more closely to how it is being evaluated.",
"This follows a long line of work in NLP showing that tasks such as dependency parsing (Ballesteros et al., 2016), constituency parsing (Goodman, 1996), and machine translation (Och, 2003) all improve when optimizing a loss that aligns with evaluation.",
"Teacher forcing was important at the beginning of model training.",
"When training variants of S-LSTM that did not use teacher forcing at the beginning, and could instead explore bad segmentations from the start, the segmentation failed to converge and the model performed universally poorly.",
"S-LSTM is capable of taking advantage of the complementary information by jointly learning to segment and label.",
"It is capable of learning to recover from segmentation errors by exploring towards the end of training.",
"But the ablation study shows that there is one more important component of S-LSTM that allows it to improve over previous baselines: LSTM pooling over segments.",
"The addition of the segment pooling layer improves MAP and Prec@1 across all four datasets in the heading prediction task (Table 2), comparing the model without exploration (S-LSTM, -expl) against the model without exploration that uses average pooling instead of LSTM pooling (S-LSTM, -expl, -pool).",
"It is the combination of these three improvements that comprises the full S-LSTM.",
"Table 3: Transfer results across four datasets (Wiki-50, Cities, Elements: Pk and MAP; Clinical: Pk only). Models marked * are trained on the training portion of the corresponding dataset, whereas the others are either unsupervised or trained on a different dataset. For the Wiki-50, Cities, and Elements datasets, S-LSTM outperforms all models not trained on the corresponding training set.",
"Table 4: WikiSection-headings multi-label classification ( de_disease , 115 topics). A model trained to jointly predict segment bounds and segment labels improves classification over a baseline which only predicts labels; both are given oracle segment bounds and do not use exploration.",

| model configuration | Pk | P@1 | MAP |
|---|---|---|---|
| S-LSTM, w/o Segment Prediction | n/a | 42.3 | 52.1 |
| S-LSTM, w/ Segment Prediction | 19.1 | 43.3 | 53.3 |

"Table 5: WikiSection-headings document segmentation ( de_disease , 115 topics); the inverse of the experiment in Table 4. A model that jointly predicts segment bounds and labels outperforms a model that only predicts segment bounds.",

| model configuration | Pk | P@1 | MAP |
|---|---|---|---|
| S-LSTM, w/o Segment Labeling | 21.8 | n/a | n/a |
| S-LSTM, w/ Segment Labeling | 19.1 | 34.7 | 44.8 |
"In Table 1, the results demonstrate that S-LSTM outperforms LSTM-LSTM-CRF baseline in almost every case for single-labeling, and in every case for segmentation.",
"This makes S-LSTM a useful model choice for cases like clinical segmentation and labeling, where segments are drawn from a small fixed vocabulary.",
"S-LSTM also generalizes easily to multi-label problems, in contrast to an IOB-tagging LSTM-LSTM-CRF, since it only requires changing the segment-pooling loss from cross-entropy to binary cross-entropy.",
"Though we compare with TextSeg (a neural model that predicts segment bounds) and SECTOR (a neural model that predicts sentence labels and post hoc segments them) and show improvements compared to both models, we also directly test the hypothesis that the segmentation and segment labeling tasks contain complementary information.",
"To do so, we conduct two experiments: (1) we fix the segment bounds at training and evaluation time, only training the model to label known segments (results in Table 4); and (2) we only have the model predict segment bounds (results in Table 5).",
"In both cases, the addition of the loss from the companion task improves performance on the main task.",
"This shows that the two tasks contain complementary information, and directly validates our core hypothesis that the two tasks are tightly interwoven.",
"Thus, considering them jointly improves performance on both tasks.",
"In this paper we introduce the Segment Pooling LSTM (S-LSTM) model for joint segmentation and segment labeling tasks.",
"We find that the model dramatically reduces segmentation error (by 30% on average across four datasets) while improving segment labeling accuracy compared to previous neural and non-neural baselines for both single-label and multi-label tasks.",
"Experiments demonstrate that jointly modeling the segmentation and segment labeling, segmentation alignment and exploration, and segment pooling each contribute to S-LSTM's improved performance.",
"In future work, we would like to explore the usefulness of transformer-based language models as sentence encoders.",
"There are additional engineering challenges associated with using models such as BERT as sentence encoders, since encoding entire documents can be too expensive to fit on a GPU without model parallelism.",
"We would also like to investigate the usefulness of an unconsidered source of document structure: the hierarchical nature of sections and subsections.",
"Like segment bounds and headers, this structure is naturally available in Wikipedia.",
"Having shown that segment bounds contain useful supervisory signal, it would be interesting to examine if segment hierarchies might also contain useful signal.",
"The authors would like to thank Sebastian Arnold for his feedback and responsiveness.",
"We would also like to thank others for their feedback, including Franck Dernoncourt, Sasha Spala, Nick Miller, Han-Chin Shing, Pedro Rodriguez, Denis Peskov, and Yogarshi Vyas.",
"This work was supported through Adobe Gift Funding, which supports an Adobe Research-University of Maryland collaboration.",
"It was completed while the primary author was interning at Adobe Research."
] | [
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"objective",
"objective",
"result",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Abstract When working with textual data, a natural application of disentangled representations is fair classification where the goal is to make predictions without being biased (or influenced) by sensitive attributes that may be present in the data (e.g., age, gender or race).",
"Dominant approaches to disentangle a sensitive attribute from textual representations rely on simultaneously learning a penalization term that involves either an adversarial loss (e.g., a discriminator) or an information measure (e.g., mutual information).",
"However, these methods require the training of a deep neural network with several parameter updates for each update of the representation model.",
"As a matter of fact, the resulting nested optimization loop is time consuming, adds complexity to the optimization dynamics, and requires careful hyperparameter selection (e.g., learning rates, architecture).",
"In this work, we introduce a family of regularizers for learning disentangled representations that do not require training.",
"These regularizers are based on statistical measures of similarity between the conditional probability distributions with respect to the sensitive attributes.",
"Our novel regularizers do not require additional training, are faster, do not involve additional tuning, and achieve better results when combined with both pretrained and randomly initialized text encoders.",
"As natural language processing (NLP) systems are taken up in an ever wider array of sectors (e.g., legal system (Dale, 2019), insurance (Ly et al., 2020), education (Litman, 2016), healthcare (Basyal et al., 2020)), there are growing concerns about the harmful potential of bias in such systems (Leidner and Plachouras, 2017).",
"Recently, a large body of research aims at analyzing, understanding and addressing bias in various applications of NLP including language modelling (Liang et al., 2021), machine translation (Stanovsky et al., 2019), toxicity detection (Dixon et al., 2018) and classification (Elazar and Goldberg, 2018).",
"Fig. 1: PCA followed by a T-SNE projection of BERT embeddings of the sentences of the DIAL corpus after T = 0, 10, 1000 iterations of our framework (based on Sinkhorn divergence).",
"Colors display the sensitive (i.e., binary gender) attribute.",
"In NLP, current systems often rely on learning continuous embedding of the input text.",
"Thus, it is crucial to ensure that the learnt continuous representations do not exhibit bias that could cause representational harms (Blodgett et al., 2020; Barocas et al., 2017), i.e., representations less favourable to specific social groups.",
"One way to prevent the aforementioned phenomenon is to enforce disentangled representations, i.e., representations that are independent of a sensitive attribute (see Fig. 1 for a visualization of different degrees of disentangled representations).",
"Learning disentangled representations has received a growing interest as it has been shown to be useful for a wide variety of tasks (e.g., style transfer (Fu et al., 2017), few shot learning (Karn et al., 2021), fair classification (Colombo et al., 2021d)).",
"For text, the dominant approaches to learn such representations can be divided into two classes.",
"The first one, relies on an adversary that is trained to recover the discrete sensitive attribute from the latent representation of the input (Xie et al., 2017).",
"However, as pointed out by Barrett et al. (2019), even though the adversary seems to do a perfect job during training, a fair amount of the sensitive information can be recovered from the latent representation when training a new adversary from scratch.",
"The second line of research involves a regularizer that is a trainable surrogate of the mutual information (MI) (e.g., CLUB (Cheng et al., 2020a), MIReny (Colombo et al., 2021d), KNIFE (Pichler et al., 2020), MINE (Belghazi et al., 2018; Colombo et al., 2021b)) and achieves higher degrees of disentanglement.",
"However, as highlighted by recent works (McAllester and Stratos, 2020; Song and Ermon, 2019), these estimators are hard to use in practice and the optimization procedure (see App. D.4) involves several updates of the regularizer parameters at each update of the representation model.",
"As a consequence, these procedures are both time consuming and involve extra hyperparameters (e.g., optimizer learning rates, architecture, number of updates of the nested loop) that need to be carefully selected, which is often no easy task.",
"Contributions.",
"In this work, we focus our attention on learning to disentangle textual representations from a discrete attribute.",
"Our method relies on a novel family of regularizers based on discrepancy measures.",
"We evaluate both the disentanglement and representation quality on fair text classification.",
"Formally, our contribution is two-fold: (1) A novel formulation of the problem of learning disentangled representations.",
"Different from previous works, which either minimize a surrogate of MI or train an adversary, we propose to minimize a statistical measure of similarity between the underlying probability distributions conditioned on the sensitive attributes.",
"This novel formulation allows us to derive new regularizers with convenient properties:",
"(i) not requiring additional learnable parameters;",
"(ii) alleviating computation burden; and",
"(iii) simplifying the optimization dynamic.",
"(2) Applications and numerical results.",
"We carefully evaluate our new framework on four different settings coming from two different datasets.",
"We strengthen the experimental protocol of previous works (Colombo et al., 2021d; Ravfogel et al., 2020) and test our approach both on a randomly initialized encoder (RNN-based) and during fine-tuning of deep contextualized pretrained representations (note that previous works, e.g., Ravfogel et al. (2020), do not fine-tune the pretrained encoder when testing their methods).",
"Our experiments are conducted on four different main/sensitive attribute pairs and involve the training of over 280 deep neural networks.",
"Our findings show that:",
"(i) disentanglement methods behave differently when applied to randomly initialized or to deep contextualized pretrained encoder; and",
"(ii) our framework offers a better accuracy/disentanglement trade-off than existing methods (i.e., relying on an adversary or on an MI estimator) while being faster and easier to train.",
"Model, data and code are available at https://github.com/PierreColombo/TORNADO .",
"Consider a tuple $(X, S)$ where $X$ is a random variable (r.v.) defined on the space of texts $\mathcal{X}$ and $S$ is a binary r.v. which corresponds to a sensitive attribute.",
"Learning disentangled representations aims at learning the parameter $\theta$ of the encoder $f_\theta : \mathcal{X} \to \mathcal{Z} \subseteq \mathbb{R}^d$, which maps $X$ to a latent representation $Z = f_\theta(X) \in \mathbb{R}^d$, where $d \in \mathbb{N}$ corresponds to the dimension of the embedding space.",
"The goal is for $Z$ to retain as much useful information from $X$ as possible while being oblivious to $S$.",
"Among the numerous possible applications for disentangled representations, we choose to focus on fair classification as it is a natural task to define the aforementioned useful information.",
"In the fair classification task, we assume access to Y , a binary r.v., which corresponds to the main label/attribute.",
"In order to learn disentangled representations for fair classification, we follow previous works (Beutel et al., 2017; Cheng et al., 2020b) and minimize the loss $\mathcal{L}(\theta, \psi, \phi) = \underbrace{\mathrm{CE}\big(C_\psi(f_\theta(X)), Y\big)}_{\text{target task}} + \lambda \cdot \underbrace{\mathcal{R}\big(f_\theta(X), S; \phi\big)}_{\text{regularizer}}$ (1), where $C_\psi : \mathcal{Z} \to \mathcal{Y}$ refers to the main classifier; $\psi$ to its learnable parameters; CE to the cross-entropy loss; $\mathcal{R}$ denotes the disentanglement regularizer; $\phi$ its parameters; and $\lambda$ controls the trade-off between disentanglement and success in the classification task.",
"We next review the two main methods that currently exist for learning textual disentangled representations: adversarial-based and MI-based .",
"In the context of disentangled representation learning, a popular method is to rely on adding an adversary to the encoder (e.g., texts (Coavoux et al., 2018), images (Xie et al., 2017), categorical data (Beutel et al., 2017)).",
"This adversary is competing against the encoder trying to learn the main task objective.",
"In this line of work, $\mathcal{R}(f_\theta(X), S; \phi) = \mathrm{CE}(C_\phi(f_\theta(X)), S)$, where $C_\phi : \mathcal{Z} \to \mathcal{S}$ refers to the adversarial classifier that is trained to minimize $\mathrm{CE}(C_\phi(f_\theta(X)), S)$.",
"Denoting by $P_{Z|S=0}$ and $P_{Z|S=1}$ the probability distributions of the conditional r.v.s $Z|S=0$ and $Z|S=1$, respectively, these works build on the fact that if $P_{Z|S=0}$ and $P_{Z|S=1}$ are different, the optimal adversary will be able to recover sensitive information from the latent code $Z$.",
"Although adversaries have achieved impressive results in many applications when applied to attribute removal, still a fair amount of information may remain in the latent representation (Lample et al., 2018).",
"To better protect sensitive information, the second class of methods involves direct mutual information minimization.",
"MI lies at the heart of information theory; it measures statistical dependencies between two random variables $Z$ and $S$ and finds many applications in machine learning (Boudiaf et al., 2020b,a, 2021).",
"The MI is a non-negative quantity that is 0 if and only if $Z$ and $S$ are independent, and is defined as follows: $I(Z; S) = \mathrm{KL}(P_{ZS} \| P_Z \otimes P_S)$ (2), where the joint probability distribution of $(Z, S)$ is denoted by $P_{ZS}$; the marginals of $Z$ and $S$ are denoted by $P_Z$ and $P_S$, respectively; and KL stands for the Kullback-Leibler divergence.",
"Although computing the MI is challenging (Paninski, 2003; Pichler et al., 2020), a plethora of recent works devise new lower (Belghazi et al., 2018; Oord et al., 2018) and upper (Cheng et al., 2020a; Colombo et al., 2021d) bounds $\hat{I}_\phi(f_\theta(X); S)$, where $\phi$ denotes the trainable parameters of the surrogate of the MI.",
"In that case, $\mathcal{R}(f_\theta(X), S; \phi) = \hat{I}_\phi(f_\theta(X); S)$.",
"These methods build on the observation that if $I(Z; S) > 0$, then $P_{Z|S=0}$ and $P_{Z|S=1}$ are different and information about the sensitive label $S$ remains in $Z$.",
"Interestingly, these approaches achieve better results than adversarial training on various NLP tasks (Cheng et al., 2020b) but involve the use of additional (auxiliary) neural networks.",
"The aforementioned methods involve the use of extra parameters (i.e., $\phi$) in the regularizer.",
"As the regularizer computes a quantity based on the representation given by the encoder with parameter $\theta$, any modification of $\theta$ requires an adaptation of the parameters of $\mathcal{R}$ (i.e., $\phi$).",
"In practice, this adaptation is performed using gradient descent-based algorithms and requires several gradient updates.",
"Thus, a nested loop (see App. D.4) is needed.",
"Additional optimization parameters and the nested loop both induce additional complexity and require a fine-tuning which makes these procedures hard to be used on large-scale datasets.",
"To alleviate these issues, the next section describes a parameter-free framework to get rid of the parameters $\phi$ present in $\mathcal{R}$.",
"This section describes our approach to learn disentangled representations.",
"We first introduce the main idea and provide an algorithm to implement the general loss.",
"We next describe the four similarity measures proposed in this approach.",
"As detailed in Section 2, existent methods generally rely on the use of neural networks either in the form of an adversarial regularizer or to compute upper/lower bounds of the MI between the embedding Z = f ( X ) and the sensitive attribute S .",
"Motivated by reducing the computational and complexity load, we aim at providing regularizers that are light and easy to tune.",
"To this end, we need to get rid of the nested optimization loop, which is both time consuming and hard to tune in practice since the regularizer contains a large number of parameters (e.g., neural networks) that need to be trained by gradient descent.",
"Contrary to previous works in the literature, and following the intuitive idea that $P_{Z|S=0}$ and $P_{Z|S=1}$ should be as close as possible, we introduce similarity measures between $P_{Z|S=0}$ and $P_{Z|S=1}$ to build a regularizer $\mathcal{R}$.",
"It is worth noting that the similarity measures do not require any additional learnable parameters.",
"For the sake of clarity, in the remainder of the paper we define $P_i \triangleq P_{Z|S=i}$ and $Z_i \triangleq f_\theta(X \mid S=i)$ for $i \in \{0, 1\}$.",
"Given a similarity measure $\mathrm{SM} : \mathcal{M}_1^+(\mathcal{Z}) \times \mathcal{M}_1^+(\mathcal{Z}) \to \mathbb{R}^+$, where $\mathcal{M}_1^+(\mathcal{Z})$ denotes the space of probability distributions on $\mathcal{Z}$, we propose to regularize the downstream task by $\mathrm{SM}(P_0, P_1)$.",
"Precisely, the optimization problem boils down to the following objective: $\mathcal{L}(\theta, \psi) = \underbrace{\mathrm{CE}(C_\psi(f_\theta(X)), Y)}_{\text{target task}} + \lambda \cdot \underbrace{\mathrm{SM}(P_0, P_1)}_{\text{regularizer}}$. (3)",
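A minimal PyTorch sketch of one training step under objective (3); the function signature and the group-splitting convention are our own assumptions, and sm stands for any differentiable similarity measure from Section 3.2:

```python
import torch
import torch.nn.functional as F

def training_step(encoder, classifier, sm, x, y, s, lam=0.1):
    """One optimization step of objective (3): task cross-entropy plus a
    parameter-free similarity regularizer computed between the embeddings
    of the two sensitive groups present in the minibatch (s in {0, 1})."""
    z = encoder(x)                            # latent representations
    loss = F.cross_entropy(classifier(z), y)  # target task term
    z0, z1 = z[s == 0], z[s == 1]
    if len(z0) > 0 and len(z1) > 0:           # both groups in the batch
        loss = loss + lam * sm(z0, z1)        # regularizer term
    return loss
```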
"The proposed statistical measures of similarity, detailed in Section 3.2, have explicit and simple formulas.",
"It follows that the use of neural networks is no longer necessary in the regularizer term which 2616 reduces drastically the complexity of the resulting learning problem.",
"The disentanglement can be controlled by selecting appropriately the measure SM.",
"For the sake of space, the algorithm we propose to solve (3) is deferred to App. B.",
"3.2 Measures of Similarity between Distributions. In this work, we choose to focus on four different (dis-)similarity functions, ranging from the most popular in machine learning, such as the Maximum Mean Discrepancy (MMD) and the Sinkhorn divergence (SD), to standard statistical discrepancies such as the Jeffrey divergence (J) and the Fisher-Rao distance (FR).",
"Let $k : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ be a kernel and $\mathcal{H}$ its corresponding Reproducing Kernel Hilbert Space, with inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and norm $\| \cdot \|_{\mathcal{H}}$.",
"Denote by $B_{\mathcal{H}} = \{ \varphi \mid \|\varphi\|_{\mathcal{H}} \le 1 \}$ the unit ball of $\mathcal{H}$.",
"The Maximum Mean Discrepancy (MMD) (Gretton et al., 2007) between the two conditional distributions $P_0, P_1 \in \mathcal{M}_1^+(\mathcal{Z})$ associated with the kernel $k$ is defined as: $\mathrm{MMD}(P_0, P_1) = \sup_{\varphi \in B_\mathcal{H}} \big| \mathbb{E}_{P_0}[\varphi(Z_0)] - \mathbb{E}_{P_1}[\varphi(Z_1)] \big|$, whose square admits the closed form $\mathrm{MMD}^2(P_0, P_1) = \mathbb{E}_{P_0 \otimes P_0}[k(Z_0, Z_0')] + \mathbb{E}_{P_1 \otimes P_1}[k(Z_1, Z_1')] - 2\,\mathbb{E}_{P_0 \otimes P_1}[k(Z_0, Z_1)]$.",
"The MMD can be estimated with a quadratic computational complexity $O(n^2)$, where $n$ is the sample size.",
"In this paper, MMD is computed using the Gaussian kernel $k : (z_0, z_1) \mapsto \exp(-\|z_0 - z_1\|^2 / 2\sigma^2)$, where $\|\cdot\|$ is the usual Euclidean norm.",
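As an illustration, here is a NumPy sketch of the plug-in (biased, V-statistic) estimator of the squared MMD with a Gaussian kernel; in training one would use a differentiable backend such as PyTorch on minibatch embeddings, and the bandwidth sigma is a free choice here:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), computed pairwise."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(z0, z1, sigma=1.0):
    """Biased plug-in estimator of the squared MMD between samples
    z0 ~ P0 and z1 ~ P1; O(n^2) in the sample size."""
    k00 = gaussian_kernel(z0, z0, sigma).mean()
    k11 = gaussian_kernel(z1, z1, sigma).mean()
    k01 = gaussian_kernel(z0, z1, sigma).mean()
    return k00 + k11 - 2 * k01

# Example: embeddings of the two sensitive groups
rng = np.random.default_rng(0)
z0, z1 = rng.normal(0, 1, (64, 8)), rng.normal(0.5, 1, (64, 8))
print(mmd2(z0, z1))
```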
"The Wasserstein distance aims at comparing two probability distributions through the resolution of the Monge-Kantorovich mass transportation problem (see e.g. Villani (2003); Peyré and Cuturi (2019)): $W_p(P_0, P_1) = \min_{\pi \in U(P_0, P_1)} \int_{\mathcal{Z} \times \mathcal{Z}} \|z_0 - z_1\|^p \, d\pi(z_0, z_1)$ (4), where $U(P_0, P_1) = \{\pi \in \mathcal{M}_1^+(\mathcal{Z} \times \mathcal{Z}) : \int \pi(z_0, z_1) \, dz_1 = P_0(z_0),\ \int \pi(z_0, z_1) \, dz_0 = P_1(z_1)\}$ is the set of joint probability distributions with marginals $P_0$ and $P_1$.",
"For the sake of clarity, the power p in W p is omitted in the remainder of the paper.",
"When $P_0$ and $P_1$ are discrete measures, (4) is a linear problem and can be solved with a supercubic complexity $O(n^3 \log n)$, where $n$ denotes the sample size.",
"To overcome this computational drawback, Cuturi (2013) added an entropic regularization term to the transport cost to obtain a strongly convex problem solvable using the Sinkhorn-Knopp algorithm (Sinkhorn, 1964), leading to a computational cost of $O(n^2)$.",
"The bias introduced by the regularization term (i.e., the quantity is no longer zero when a probability distribution is compared to itself) has been corrected by Genevay et al. (2019), leading to the Sinkhorn Divergence (SD) defined as: $\mathrm{SD}_\varepsilon(P_0, P_1) = W_\varepsilon(P_0, P_1) - \frac{1}{2} \sum_{i=0}^{1} W_\varepsilon(P_i, P_i)$, where $W_\varepsilon(P_0, P_1) = \min_{\pi \in U(P_0, P_1)} \int_{\mathcal{Z} \times \mathcal{Z}} \|z_0 - z_1\|^p \, d\pi(z_0, z_1) + \varepsilon H(\pi)$, with $H(\pi) = \int \pi(z_0, z_1) \log(\pi(z_0, z_1)) \, dz_0 \, dz_1$.",
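For illustration, here is a compact NumPy sketch of the Sinkhorn-Knopp iterations and the debiased divergence on empirical samples; a practical implementation would work in the log domain for small epsilon, and the returned cost omits the entropy term, as is common:

```python
import numpy as np

def sinkhorn_cost(z0, z1, eps=0.1, n_iter=100):
    """Entropy-regularized OT cost between two empirical measures with
    uniform weights, via Sinkhorn-Knopp scalings of the Gibbs kernel."""
    c = ((z0[:, None, :] - z1[None, :, :]) ** 2).sum(-1)  # squared-distance cost
    K = np.exp(-c / eps)                                   # Gibbs kernel
    a = np.full(len(z0), 1 / len(z0))
    b = np.full(len(z1), 1 / len(z1))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):      # alternate marginal-matching scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    pi = u[:, None] * K * v[None, :]  # approximate transport plan
    return (pi * c).sum()

def sinkhorn_divergence(z0, z1, eps=0.1):
    """Debiased Sinkhorn divergence of Genevay et al. (2019)."""
    return (sinkhorn_cost(z0, z1, eps)
            - 0.5 * (sinkhorn_cost(z0, z0, eps) + sinkhorn_cost(z1, z1, eps)))
```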
"The Fisher-Rao distance (FR) (Rao, 1945) is a Riemannian metric defined on the space of parametric distributions relying on the Fisher information.",
"The Fisher information matrix provides a natural Riemannian structure (Amari, 2012).",
"It is known to be more accurate than popular divergence measures (Costa et al., 2015).",
"Let $\mathcal{M}_1^+(\mathcal{Z}, \mathcal{P})$ be the family of parametric distributions with parameter space $\mathcal{P} \subseteq \mathbb{R}^d$.",
"The FR distance is defined as the geodesic distance (i.e., the curve providing the shortest length) between elements (i.e., probability measures) on the manifold $\mathcal{M}_1^+(\mathcal{Z}, \mathcal{P})$.",
"Parametrizing $P_0, P_1$ by parameters $p_0, p_1 \in \mathcal{P}$, respectively, such that $P_0^{p_0} \equiv P_0$ and $P_1^{p_1} \equiv P_1$, the FR distance between $P_0^{p_0}$ and $P_1^{p_1}$ is defined as: $\mathrm{FR}(P_0^{p_0}, P_1^{p_1}) = \min_{\gamma} \int \sqrt{\dot{\gamma}(t)^\top G(\gamma(t)) \, \dot{\gamma}(t)} \, dt$ (5), where $\gamma(t)$ is a curve connecting $p_0$ and $p_1$ in the parameter space $\mathcal{P}$, and $G$ is the Fisher information matrix.",
"In general, the optimization problem of (5) can be solved using the well-known Euler-Lagrange differential equations leading to computational difficulties.",
"Atkinson and Mitchell (1981) have provided computable closed forms for specific families of distributions, such as multivariate Gaussians with diagonal covariance matrices.",
"Under this assumption, the parameters $p_0$ and $p_1$ are defined by $p_{i,j} = (\mu_{i,j}, \sigma_{i,j}) \in \mathbb{R}^2$ for $i \in \{0, 1\}$ and $1 \le j \le d$, with $\mu_i \in \mathbb{R}^d$ the mean vector and $\mathrm{Diag}(\sigma_i)$ the diagonal covariance matrix of $P_i$, where $\sigma_i$ is the variance vector.",
"The resulting FR metric admits the following closed form (see e.g. Pinele et al. (2020)): $\mathrm{FR}(P_0^{p_0}, P_1^{p_1}) = \sqrt{\sum_{j=1}^{d} [d_{\mathrm{FR}}(p_{0,j}, p_{1,j})]^2}$, where $d_{\mathrm{FR}}(p_{0,j}, p_{1,j})$ is the univariate Fisher-Rao distance detailed in the Appendix.",
"The Jeffrey divergence (J) is a symmetric version of the Kullback-Leibler (KL) divergence and measures the similarity between two probability distributions.",
"Formally, it is defined as follows: $J(P_0, P_1) = \frac{1}{2} \big[ \mathrm{KL}(P_0 \| P_1) + \mathrm{KL}(P_1 \| P_0) \big]$.",
"Computing $\mathrm{KL}(P_0 \| P_1)$ either requires knowledge of $P_0$ and $P_1$, or knowledge of the density ratio (Rubenstein et al., 2019).",
"Without any further assumption on P 0 , P 1 or the density ratio, the resulting inference problem is known to be provably hard (Nguyen et al., 2010).",
"Although previous works have addressed the estimation problem without making assumptions on P 0 and P 1 (Oord et al., 2018; Hjelm et al., 2018; Belghazi et al., 2018), these methods often involve additional parameters (e.g., neural networks (Song and Ermon, 2019), kernels (McAllester and Stratos, 2020)), require additional tuning (Hershey and Olsen, 2007), and are time expensive.",
"Motivated by speed, simplicity, and a fair comparison with FR, for this specific divergence we choose to assume that $P_0$ and $P_1$ are multivariate Gaussian distributions with mean vectors $\mu_0$ and $\mu_1$ and diagonal covariance matrices $\Sigma_0$ and $\Sigma_1$.",
"Thus, $\mathrm{KL}(P_0 \| P_1)$ boils down to: $\frac{1}{2} \big[ \log \frac{|\Sigma_1|}{|\Sigma_0|} - d + \mathrm{Tr}(\Sigma_1^{-1} \Sigma_0) + (\mu_0 - \mu_1)^\top \Sigma_1^{-1} (\mu_0 - \mu_1) \big]$, where $\mathrm{Tr}(\cdot)$ denotes the trace.",
"Remark.",
"FR and J are computed under the multivariate Gaussian with diagonal covariance matrix assumption.",
"In this case, the Sinkhorn approximation is not needed, as (4) can be efficiently computed thanks to the following closed form: $W_2^2(P_0, P_1) = \|\mu_0 - \mu_1\|^2 + \mathrm{Tr}\big(\Sigma_0 + \Sigma_1 - 2(\Sigma_0 \Sigma_1)^{1/2}\big)$.",
"Remark. The quantities defined in this section are replaced by their empirical estimates.",
"Due to space constraints, the formulas are described in App. A.2.",
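To make the diagonal-Gaussian closed forms concrete, here is a NumPy sketch of empirical estimators of J and the squared 2-Wasserstein distance; the moment-fitting step and the small variance floor are our own choices:

```python
import numpy as np

def gaussian_stats(z):
    """Fit a diagonal Gaussian to the embeddings z of one sensitive group."""
    return z.mean(0), z.var(0) + 1e-8  # mean and per-dimension variance

def jeffrey(z0, z1):
    """Jeffrey divergence under the diagonal-Gaussian assumption."""
    (m0, v0), (m1, v1) = gaussian_stats(z0), gaussian_stats(z1)
    def kl(ma, va, mb, vb):  # closed-form KL between diagonal Gaussians
        return 0.5 * (np.log(vb / va) - 1 + va / vb + (ma - mb) ** 2 / vb).sum()
    return 0.5 * (kl(m0, v0, m1, v1) + kl(m1, v1, m0, v0))

def w2_squared(z0, z1):
    """Closed-form squared 2-Wasserstein distance between the two fitted
    diagonal Gaussians."""
    (m0, v0), (m1, v1) = gaussian_stats(z0), gaussian_stats(z1)
    return ((m0 - m1) ** 2).sum() + (v0 + v1 - 2 * np.sqrt(v0 * v1)).sum()
```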
"In this section, we describe the datasets, metrics, encoder and baseline choices.",
"Additional experimental details can be found in App. D.",
"For fair comparison, all models were re-implemented.",
"To ensure backward comparison with previous works, we choose to rely on the DIAL (Blodgett et al., 2016) and the PAN (Rangel et al., 2014) datasets.",
"For both, main task labels ( Y ) and sensitive labels ( S ) are binary, balanced and splits follow (Barrett et al., 2019).",
"Random guessing is expected to achieve near 50% of accuracy.",
"The DIAL corpus has been automatically built from tweets and the main task is either polarity or mention prediction (polarity and emotion have been widely studied in the NLP community (Jalalzai et al., 2020; Colombo et al., 2019)).",
"The sensitive attribute is related to race (i.e., non-Hispanic blacks and non-Hispanic whites) which is obtained using the author geo-location and the words used in the tweet.",
"The PAN corpus is also composed of tweets and the main task is to predict a mention label.",
"The sensitive attribute is obtained through a manual process and annotations contain the age and gender information from 436 Twitter users.",
"For the choice of the evaluation metrics, we follow the experimental setting of Colombo et al. (2021d); Elazar and Goldberg (2018); Coavoux et al. (2018).",
"To measure the success of the main task, we report the classification accuracy.",
"To measure the degree of disentanglement of the latent representation we train from scratch an adversary to predict the sensitive labels from the latent representation.",
"In this framework, a perfect model would achieve a high main task accuracy (i.e., near 100%) and a low (i.e., near 50%) accuracy as given by the adversary prediction on the sensitive labels.",
"Tab. 1: Results on the fair classification task: the main task accuracy (higher is better) corresponds to the column with Y, and S denotes the sensitive task accuracy (lower is better).",
"CE refers to a classifier trained with the CE loss solely ($\lambda = 0$ in (1)).",
"Following Colombo et al. (2021d), we also report the disentanglement dynamics under variations of $\lambda$ and train a different model for each $\lambda \in \{0.001, 0.01, 0.1, 1, 10\}$.",
"Choice of the encoder.",
"Previous works that aim at learning disentangled representations either focus on randomly initialized RNN-encoders (Colombo et al., 2021d; Elazar and Goldberg, 2018; Coavoux et al., 2018) or only use pretrained representations as a feature extractor (Ravfogel et al., 2020).",
"In this work, we choose to fine-tune BERT during training as we believe it to be a more realistic setting.",
"Choice of the baseline models.",
"We choose to compare our methods against adversarial training from Elazar and Goldberg (2018); Coavoux et al. (2018) (model named ADV) and the MI bound recently introduced in Colombo et al. (2021d) (named MI), which has been shown to be more controllable than previous MI-based estimators.",
"In this section, we gather experimental results for fair classification task.",
"We study our framework when working either with RNN or BERT encoders.",
"The parameter $\lambda$ (see (3)) controls the trade-off between success on the main task and disentanglement for all models.",
"For the mention task, the sensitive attribute and the main label Y are tightly entangled.",
"By comparing Fig. 2 and Fig. 3, we notice that the race label is easier to disentangle from the sentiment task than from the mention task.",
"Randomly initialized RNN encoders.",
"To allow a fair comparison with previous works, we start by testing our framework with RNN encoders on the DIAL dataset.",
"Results are depicted in Fig. 2. It is worth mentioning that we observe a phenomenon similar to the one reported in Colombo et al. (2021d).",
"More specifically, we observe:",
"(i) the adversary degenerates for $\lambda = 10$ and allows neither reaching perfectly disentangled representations nor controlling the desired degree of disentanglement;",
"(ii) the MI allows better control over the desired degree of disentanglement and achieves better-disentangled representations at a reduced cost on the main task accuracy.",
"Fig. 2 shows that the encoders trained using the statistical measures of similarity, both with and without the multivariate Gaussian assumption, are able to learn disentangled representations.",
"We can also remark that our losses follow the expected behaviour: when $\lambda$ increases, more weight is given to the regularizer and the sensitive task accuracy decreases; thus the representations are more disentangled according to the probing classifier.",
"Overall, we observe that the W regularizer is the best performer, with optimal performance for $\lambda = 1$ on both attributes.",
"On the other hand, we observe that the FR and J divergences are useful to learn to disentangle the representations, but disentangling using these similarity measures comes at a greater cost compared to W. Both MMD and SD also perform well and are able to learn disentangled representations with little cost on the main task performance (for both losses, we did not observe any consistent improvement when $\lambda > 10$).",
"However, on DIAL , they are not able to learn perfectly disentangled representations.",
"Similar conclusions can be drawn on PAN; results are reported in App. C.1.",
"BERT encoder.",
"Results of the experiment conducted with the BERT encoder are reported in Fig. 3. As expected, we notice that on both tasks the main and sensitive task accuracies for small values of $\lambda$ are higher than when working with RNN encoders.",
"When training a classifier without disentanglement constraints (i.e., $\lambda = 0$ in (1)), which corresponds to the dashed lines in Fig. 2 and Fig. 3, we observe that the BERT encoder naturally preserves more sensitive information (as measured by the accuracy of the adversary) than a randomly initialized encoder.",
"Contrary to what is usually undertaken in previous works (e.g., Ravfogel et al. (2020)), we allow the gradient to flow through the BERT encoder while performing fine-tuning.",
"We observe a different behavior when compared to previous experiments.",
"Our losses under the multivariate diagonal Gaussian assumption (i.e., W, J, FR) can only disentangle the representations at a high cost on the main task (i.e., perfect disentanglement corresponds to performance on the main task close to a random classifier).",
"When training the encoder with either SD or MMD, we are able to learn disentangled representations with a limited cost on the main task accuracy: $\lambda = 0.1$ achieves good disentanglement with less than a 3% loss in main task accuracy.",
"These methods allow little control over the degree of disentanglement, however: there is a steep transition between light protection with no loss on the main task accuracy and strong protection with destruction of the discriminative features.",
"Fig. 2: Results on DIAL with RNN.",
"Dashed lines correspond to a model trained with the CE loss solely (i.e., $\lambda = 0$ in (1)).",
"Figures on the left are dedicated to the mention attribute, while those on the right report results on the sentiment attribute.",
"The main task consists in predicting Y, thus higher is better.",
"The sensitive task accuracy is obtained by training a classifier to predict S on the final representation; thus an ideal model would reach 50% accuracy.",
"Takeaways.",
"Our new framework relying on statistical Measures of Similarity introduces powerful methods to learn disentangled representations.",
"When working with randomly initialized RNN encoders to learn disentangled representations, we advise relying on W. In the presence of pretrained encoders (i.e., BERT), by contrast, we observe a very different behavior and recommend using SD.",
"Fig. 3: Results on DIAL for mention (left) and sentiment (right) attribute using a pretrained BERT.",
"We report in Table 2 the training time and the number of parameters of each method.",
"The reduction in the number of parameters brought by our method is marginal; however, getting rid of these parameters is crucial.",
"Indeed, they require a nested loop and a fine selection of the hyperparameters, which complexifies the global system dynamics.",
"Takeaways.",
"Contrary to MI-based or adversarial-based regularizers, which are difficult (or even prohibitive) to implement on large-scale datasets, our framework is simpler and consistently faster, which makes it a better candidate when working with large-scale datasets.",
"Results presented in Section 5.1 have shown a different behaviour for RNN-based and BERT-based encoders and for the different measures of similarity.",
"Tab. 2: Speed and number of model parameters (given in thousands) when working with DIAL.",
"The runtime for 1 gradient update (denoted 1 upd.) or for 1 epoch is given for a batch of 64 when running our models on a single NVIDIA-V100.",
"The relative improvements (in %) are given with respect to the MI model, which is our strongest baseline.",
"Here, we aim at understanding this phenomenon.",
"To do so, we examine how the similarity measures change during training.",
"Takeaways.",
"When using a RNN encoder, the system is able to maximize the main task accuracy while jointly minimizing most of the similarity measures.",
"For BERT, where the model is more complex, measures relying on the multivariate diagonal Gaussian assumption either plateau in disentanglement (e.g., FR or J) or cause the system to fail to learn discriminative features and perform poorly on the main task (e.g., W).",
"When combined with BERT both SD and MMD can achieve high main task accuracy while protecting the sensitive attribute.",
"In this experiment, we investigate how predictive of disentanglement each similarity measure is, i.e., does a lower value of the similarity measure indicate better disentangled representations?",
"We gather, for both the mention and sentiment attributes, 5 checkpoints per model (i.e., each regularizer and each value of $\lambda$ corresponds to one model).",
"For each RNN model, we select one checkpoint after 5k , 10k , 15k , 20k , 25k gradient updates, and for BERT we select one checkpoint after 2k , 4k , 6k , 8k , 10k gradient updates to obtain the same number of models.",
"For each type of loss, we ended up with 50 models.",
"For each model and each checkpoint, we train an adversary, compute the sensitive task accuracy and evaluate the Pearson correlation between the sensitive task accuracy and the corresponding similarity measure.",
"Results are presented in Fig. 5.",
"Takeaways.",
"Both ADV and MI are poorly correlated with the degree of disentanglement of the learned representations.",
"We find this result unsurprising in light of the findings of Xie et al. (2017) and Song and Ermon (2019).",
"All our losses achieve high correlation (≥ 78), except for J in the mention task with both encoders, and FR with BERT on the mention task, which achieve medium/low correlation.",
"We believe that the high correlation showcases the validity of the proposed approaches.",
"Concluding Remarks",
"Malik Boudiaf, Ziko Imtiaz Masud, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, and Pablo Piantanida. 2021. Mutual-information based few-shot classification. arXiv preprint arXiv:2106.12252.",
"Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.",
"Maximin Coavoux, Shashi Narayan, and Shay B. Cohen. 2018. Privacy-preserving neural representations of text. arXiv preprint arXiv:1808.09408.",
"Pierre Colombo, Emile Chapuis, Matthieu Labeau, and Chloé Clavel. 2021a. Code-switched inspired losses for spoken dialog representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November 2021, pages 8320-8337.",
"We have introduced a new framework for learning disentangled representations which is faster to train, easier to tune and achieves better results than adversarial or MI-based methods.",
"Our experiments on the fair classification task show that for RNN encoders, our methods relying on the closed-form of similarity measures under a multivariate Gaussian assumption can achieve perfectly disentangled representations with little cost on the main tasks (e.g. using Wasserstein).",
"On BERT representations, our experiments show that the Sinkhorn divergence should be preferred.",
"It can achieve almost perfect disentanglement at little cost, but allows less control over the degree of disentanglement."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount.",
"Attention has been seen as a solution to increase performance, while providing some explanations.",
"However, a debate has started to cast doubt on the explanatory power of attention in neural networks.",
"Although the debate has created a vast literature thanks to contributions from various areas, the lack of communication is becoming more and more tangible.",
"In this paper, we provide a clear overview of the insights on the debate by critically confronting works from these different areas.",
"This holistic vision can be of great interest for future works in all the communities concerned by this debate.",
"We sum up the main challenges spotted in these areas, and we conclude by discussing the most promising future avenues on attention as an explanation.",
"Attention mechanisms have been widely used in various tasks of Natural Language Processing (NLP) as well as in other fields of machine learning (e.g., Computer Vision (Mnih et al., 2014; Li et al., 2019)).",
"These mechanisms draw insight from the intuition that humans build the representation of a whole scene by dynamically focusing on relevant parts at different times (Rensink, 2000).",
"The general form of attention has been named differently according to authors (alignment model (Bahdanau et al., 2015) and attention mechanism (Vaswani et al., 2017)).",
"In essence, the attention function maps a query Q and keys K to scalar scores (Vaswani et al., 2017).",
"These scores are fed to a softmax function, in turn producing a set of attention weights that are then applied to values V .",
"Different kinds of attention are thus possible according to how many keys are attended to (global vs. local attention, according to Luong et al. (2015)) and where the query is generated (cross- vs. self-attention, as in the works of Bahdanau et al. (2015) and Vaswani et al. (2017)).",
"In this paper, we focus on attention regardless of these technical differences.",
"There are mainly two ways of computing the attention weights $\alpha$: Bahdanau et al. (2015) introduced additive attention, $\alpha = \mathrm{softmax}\big(w_3^\top \tanh(W_1 K + W_2 Q)\big)$, where $w_3$, $W_1$, $W_2$ are model parameters to be learned, and Vaswani et al. (2017) introduced scaled dot-product attention, $\alpha = \mathrm{softmax}\big(\frac{K Q}{\sqrt{m}}\big)$, where $m$ represents the dimension of $K$.",
"These two forms are theoretically similar (Vaswani et al., 2017) and generally give the same results (Jain and Wallace, 2019), the dot-product form being faster on certain tasks from a practical point of view.",
"Since the introduction of attention mechanisms in the literature, many have seen the opportunity to use the weights for explaining neural networks (e.g., Xu et al. (2015); Martins and Astudillo (2016); Choi et al. (2016); Xie et al. (2017); Mul-lenbach et al. (2018)).",
"Indeed, the attention weights link the input to the remaining of the network with the aim of performing a certain task, and are trained to do so through back-propagation.",
"This link between the input and the remaining of the network is used to work on explainability , which in machine learning and NLP is defined as the capacity to explain a non-interpretable (Bibal and Frnay, 2016), i.e., black-box, model (Guidotti et al., 2018).",
"The two major ways to explain black-box models are global explanations, providing clues about the behavior of the model as a whole, and local explanations, explaining particular decisions.",
"Using attention to explain neural networks mainly pertains to the latter, even if some authors study attention for global explanation (e.g., Clark et al. (2019)).",
"Explanations can also be faithful (how close the explanation is to the inner workings of the model) (Rudin, 2019; Jacovi and Goldberg, 2020), 3889 or plausible (does the user consider the explanation of the model plausible?) (Riedl, 2019; Jacovi and Goldberg, 2020).",
"It should be noted that explanation presupposes some degree of transparency to the user, whether it is faithful or plausible.",
"Indeed, disregarding this aspect would entail that the most faithful explanation is the black-box model itself.",
"Recently, a debate fundamentally questioned whether attention can be used as explanation (Jain and Wallace, 2019).",
"An immediate response by Wiegreffe and Pinter (2019) challenged some of the arguments of Jain and Wallace (2019).",
"To this day, the debate about is attention explanation? continues and is the source of a rich and diverse literature.",
"Researchers from different areas have mostly contributed to this debate without referring to works outside, and sometimes even inside, their area.",
"These insights include theoretical analyses of attention, the necessity to bring users in the loop, questioning the evaluation methodology for model explanation, and more.",
"This paper brings together the papers from these different areas in order to provide an outline of the quickly growing and vast literature on the subject.",
"Moreover, we discuss the lessons learned and highlight the main issues and perspectives.",
"To accurately reflect the debate, we only focus on papers that are posterior to the works of Jain and Wallace (2019) and Wiegreffe and Pinter (2019), and that explicitly rely on these two papers to contribute to the debate.",
"This paper proposes the first introduction to the debate about is attention explanation?.",
"The main contributions of this work are as follows: a summary and a discussion of the actual state of the debate by identifying convergences and disagreements in the literature; an extraction and structure of the main insights from papers of different areas that generally do not interact; and the bases for developing research on attention as explanation, with a more integrated state-of-the-art built upon a multitude of perspectives.",
"In order to present the different insights on the debate, we briefly summarize the two seminal papers (Section 2), describing the arguments of the two original papers that represent the source of the ongoing debate.",
"We also present survey papers that mention the debate within a broader context (Section 3).",
"We then investigate the different research perspectives we extracted from the literature (Sections 4 to 9).",
"Finally, we analyze the insights offered by those works and offer foundations to build upon for future research related to attention as explanation (Section 10).",
"Jain and Wallace (2019) make a set of observations on attention weights in a battery of experiments:",
"(i) an analysis of the correlations between attention weights and feature importance methods (gradient-based and leave-one-out) and",
"(ii) a study of the impact of counterfactual attention weight distributions on the final prediction by randomly shuffling the attention weights, and by shuffling them adver-sarially (i.e., by creating distributions that correspond to a focus on a different set of features than the one in the original attention distribution).",
"The experiments are performed on three tasks: binary text classification, question answering and natural language inference.",
"When commenting upon the results of their experiments, the authors' observations are:",
"(i) there are poor correlations between attention weights and gradient-based or leave-one-out methods for explanation and",
"(ii) shuffling the attention weights in a neural model does not affect the final prediction, except for some rare cases where the prediction relies on a few high precision tokens.",
"The conclusion they draw from the poor correlations with other explanation methods and the lack of exclusive explanation is that attention cannot be used as a means of explanation.",
"Wiegreffe and Pinter (2019) agree on the importance of the questions raised by Jain and Wallace (2019) and reply to their claims.",
"They agree with the first observation and the corresponding experimental setup.",
"However, they object to the second claim, stating that only modifying the attention weights in the model does not produce a real attention-based model.",
"Indeed, if the attention weights should be modified for experimental purposes, then the model should be retrained to correspond to a real trained model with those modified attention weights.",
"In addition, they also object to the exclusive explanation argument that attention is \" an explanation, not the explanation\"",
"(Wiegreffe and Pinter, 2019, p. 13).",
"Indeed, several plausible explanations can co-exist for a similar degree of faithfulness.",
"The clash between the initial use of attention as explanation and the 2019 studies debating over the validity of considering attention as an expla-3890 nation started a vast literature on the subject.",
"The following section presents survey papers that are mentioning the debate within a broader perspective.",
"Usually, when exploring a question, survey papers are a good starting point, as they have the advantage of covering a broader scope.",
"However, there is no in-depth introduction to the debate, as survey papers only briefly mention the debate and sometimes do not really add something significant for the discussion",
"(e.g., Chaudhari et al.",
"(2019)",
"and Lindsay",
"(2020)).",
"Please note that we only discuss surveys that add significant elements to the discussion.",
"Galassi et al.",
"(2020)",
"propose a survey on attention.",
"They recall the results of Jain and Wallace",
"(2019)",
"on the fact that attention may not be explanation, but also refer to the fact that only faithful explanations",
"(and not plausible ones; see Section 7)",
"are considered.",
"The explanation perspective of the survey is focused on the work of Zhang et al.",
"(2019), which discusses how well attention captures the importance of abstract features in multilayer neural networks when dealing with images.",
"Galassi et al.",
"(2020)",
"argue that an answer to the question is attention explanation? with image data may not generalize to text, and should be veri-fied, as human understanding mechanisms strongly differ between images and texts.",
"de Santana Correia and Colombini",
"(2021)",
"introduce the debate in broad terms in Section 5.7 of their survey, but point out that, based on the work of Vashishth et al.",
"(2019), the answer to the question is attention explanation? can take different shapes based on the NLP task that is studied",
"(see our Section 6 for more details on this point of the debate).",
"Later in their paper, they also mention, like Galassi et al.",
"(2020), that some works show that attention in transformers focuses on syntactical structures",
"(Voita et al., 2018; Vig and Belinkov, 2019; Tenney et al., 2019; Clark et al., 2019).",
"This indicates that global explanations based on attention can be provided, but do not answer the need for the local, decision-based, explanation that is mainly discussed in the debate.",
"Ras et al.",
"(2021)",
"also stress that the debate has been extended to several NLP tasks in the work of Vashishth et al.",
"(2019).",
"They add the information that mixed results have been obtained in the debate",
"(Serrano and Smith, 2019; Baan et al., 2019).",
"Contrary to the short introductions to the debate in these survey papers, we aim at providing a clear and rather exhaustive view of the different ways the debate is tackled in the literature.",
"The different insights on the debate, which are unfortunately not regrouped and discussed in these surveys",
"(because the debate is not their main focus), are numerous: some papers add arguments about the fact that attention is not explanation",
"(Section 4), provide analyses to explain why attention is not explanation",
"(Section 5), analyze the debate on different NLP tasks",
"(Section 6), discuss the methodological issues at the heart of the debate",
"(Section 7), evaluate the explanatory power of attention with humans",
"(Section 8), or propose solutions to make attention become explanation",
"(based on technical developments or on user-in-the-loop strategies)",
"(Section 9).",
"Table 1 presents an overview of all works discussed in our paper, with the task(s)",
"and architecture(s)",
"they study",
"(when applicable), and the section(s)",
"in which they appear.",
"Some works may be considered as the direct continuation of the arguments of Jain and Wallace",
"(2019)",
"by adding experiments that corroborate their findings, e.g., by showing that the comparison of attention with other explainable methods different from the gradient-based one leads to similar conclusions.",
"Serrano and Smith",
"(2019)",
"show that removing features considered as important by attention less often leads to a decision flip than removing features considered important by gradient-based methods.",
"This means that the features deemed important by attention for a decision are not so important for the model.",
"This, therefore, adds to the first argument of Jain and Wallace",
"(2019)",
"against the relevance of attention as an indicator of feature importance.",
"Thorne et al.",
"(2019)",
"demonstrate that applying LIME",
"(Ribeiro et al., 2016)",
"on an attention-based neural network can provide good explanations that the attention itself cannot provide.",
"They conclude on this subject that their experimental results are aligned with the ones of Jain and Wallace",
"(2019).",
"Mohankumar et al.",
"(2020)",
"investigate attention on top of LSTMs",
"(attention-LSTMs).",
"Their study focuses on why attention in such models neither provides plausible , nor faithful , explanations.",
"They use a variety of NLP tasks",
"(sentiment analysis, natural language inference, question answering and paraphrase detection)",
"and randomly permute atten-3891 Work Task Architecture Section Galassi et al.",
"tion weights as Jain and Wallace",
"(2019).",
"They find that attention-LSTM's outputs do not change much after the permutation and conclude that attention weights are not faithful explanations in attention-LSTMs.",
"The authors propose changes to attention-LSTMs to make attention a faithful explanation",
"(see Section 9.1).",
"Moreover, by analyzing the attention given to part-of-speech tags, they find that the model cannot provide a plausible explanation either, since, for several datasets, a significant amount of attention is given to punctuation.",
"Finally, Ethayarajh and Jurafsky",
"(2021)",
"show that attention weights are not Shapley values",
"(i.e., a method for feature importance)",
"(Lundberg and Lee, 2017).",
"This result is in line with Jain and Wallace",
"(2019)",
"on the fact that the attention weights do not correlate with other explanation techniques",
"(saliency maps or Shapley values).",
"The authors however note that attention flows",
"(i.e., an extension of attention weights obtained after postprocessing)",
"(Abnar and Zuidema, 2020)",
"are Shapley values, which may indicate that using attention in another way could lead to explanation.",
"Bai et al.",
"(2021)",
"show that attention can be put on uninteresting tokens because of an effect they call combinatorial shortcuts.",
"The key idea is that attention is calculated on the basis of a biased input: the attention mechanism will try to select biased features to adapt the biased estimations to minimize the overall loss functions",
"(Bai et al., 2021, p. 27).",
"For instance, if one adds random tokens",
"(such as A, B, and C)",
"to all documents in a corpus, one might find that some of these tokens are considered as important for the positive",
"(or negative)",
"class because their representation ends up being similar to the representation of good",
"(or bad), even if their information content for the task is negligible, as they are present in all documents.",
"Brunner et al.",
"(2020)",
"theoretically show that attention weights in transformers can be decomposed into two parts, from which the effective attention part corresponds to the attention that really affects the output.",
"Effective attention focuses on the effective input needed by the model for the task and is not biased by the representation of the input.",
"Kobayashi et al.",
"(2020)",
"extend the work of Brunner et al.",
"(2020), but focus on describing the effective attention part in more detail instead of using it to improve the model.",
"Likewise, Sun and Marasovic",
"(2021)",
"also extend the work of Brunner et al.",
"(2020)",
"and delve deeper into the explanation of effective attention and its use for explaining the model.",
"Sun and Lu",
"(2020)",
"study attention through two specific scores: attention and polarization.",
"The attention score corresponds to the absolute value associated with each input token before the transformation into an attention weight.",
"The polarization score is a global score",
"(not instance-specific)",
"for each input token, indicating its importance for predicting the positive or negative class.",
"The authors show through these two scores why attention-based models are stable in their prediction, even when attention weights differ.",
"They also show that the match between attention and polarizing scores strongly depends on the hyperparameter values.",
"By analyzing the effect of regularization on attention, Tutek and najder",
"(2020)",
"show that one of the reasons why attention cannot be used as a faithful explanation is due to the fact that all input tokens roughly have the same influence on the prediction.",
"The authors show that regularizing attention-based models so that embedded tokens e t better correspond to their hidden representation rnn",
"( e t )",
"produces explanations that are more faithful to the model.",
"However, Meister et al.",
"(2021)",
"show that regularizing generally decreases the correlation between attention and explanation techniques, if the regularization is directed towards sparse attention weights.",
"The authors conclude that sparsity, which is often viewed as increasing interpretability of models in the literature, in this case reduces the faithfulness of explanations.",
"Another way to analyze the problem is to study the change in the representation of the meaning of a sentence when",
"(i)",
"an attention layer is added, and when",
"(ii)",
"the type of RNN encoding the input is changed",
"(Zhang et al., 2021).",
"The authors show that, in addition to an increase in accuracy, the use of attention also makes the model more stable in terms of representation of sentence meanings.",
"In this section, we introduce arguments from the literature that claim that, despite some proofs that attention is not always explanation, attention can be explanation on certain NLP tasks.",
"In general, attention mechanisms seem to provide faithful explanations in syntax-related tasks such as part-of-speech tagging and syntactic annotation.",
"Clark et al.",
"(2019)",
"thus investigate the attention heads in BERT in the context of syntactic dependency tagging and co-reference resolution.",
"They find that attention heads at different layers attend to different kinds of information",
"(e.g., direct objects of verbs, determiners of nouns or referential antecedents), with earlier layers having a broader attention span.",
"Furthermore, attention heads in the same layer tend to show similar distributions, which is a counter to the argument of Li et al.",
"(2018)",
"on the fact that encouraging attention heads to learn different distributions within layers can improve performance.",
"Overall, knowledge of syntax seems to be encoded by a variety of attention heads in different layers, and thus attention can be used as a global explanation for the tasks under investigation.",
"Similarly, Vig and Belinkov",
"(2019)",
"investigate attention in GPT-2, in particular for part-of-speech and syntactic tagging.",
"They find that each part-of-speech is attended to by a specific subset of attention heads, and that attention heads in adjacent layers attend to similar part-of-speech tags.",
"In general, attention shows which tokens were attended 3893 to for the task at hand and can thus be used as a global explanation.",
"Clark et al.",
"(2019)",
"and Vig and Belinkov",
"(2019)",
"are some of the few works analyzing attention as explanation in a multi-head setting.",
"Additional work is needed to establish the similarities and differences between single and multiple heads in the context of the debate.",
"In a different vein, Vashishth et al.",
"(2019)",
"investigate the role of attention across a variety of NLP tasks.",
"They show that, when the input consists of a single sequence",
"(e.g., in sentiment classification), the attention mechanism is comparable to a gating unit and, as such, the learned weights cannot be interpreted as attention.",
"Therefore, in this context, attention does not provide an explanation of the model's reasoning.",
"The reduction of attention to gating units however does not hold true for self-attention networks nor for tasks depending on an additional text sequence, as for example in neural machine translation or natural language inference",
"(pair-wise tasks and text generation tasks).",
"In such cases, altering learned attention weights signifi-cantly degrades performance and attention appears to be an explanation of the model and to correlate with feature importance measures.",
"This section focuses on critics of the methodology when evaluating explanations via attention.",
"The critics mainly focus on two points in the evaluation setup of Jain and Wallace",
"(2019).",
"First, Jain and Wallace",
"(2019)",
"claim that there should be a consistency between attention weights and other explanation methods which Wiegreffe and Pinter",
"(2019)",
"agree with and find none.",
"Second, they state that the fact that attention could offer different explanations",
"(which they show by shuffling the attention weights)",
"is an issue, which is a strong point of disagreement with Wiegreffe and Pinter",
"(2019).",
"Regarding the first point, Neely et al.",
"(2021)",
"compare explanation methods from the literature",
"(LIME, Integrated Gradients, DeepLIFT, Grad-SHAP and Deep-SHAP)",
"with attention-based explanations.",
"The comparison is performed on two types of classification: single-sequence classification",
"(sentiment classification)",
"and pair-sequence classification",
"(language inference and understanding, and question answering).",
"The authors find slight agreement between the different explanation methods, including attention-based explanations.",
"They conclude that checking for consistency between explanation methods should not be a criterion for evaluation, which goes against the agreement between the two seminal papers.",
"The second point on shuffling the attention weights is a subject of more discussion.",
"Ju et al.",
"(2021)",
"propose a general discussion about logic traps in evaluating interpretation.",
"Their take on this point of the debate is that a model with its manipulated attention weights in the work of Jain and Wallace",
"(2019)",
"cannot even be regarded as a trained model, which makes their manipulation meaningless (Ju et al., 2021, p. 4), which adds to the point made by Wiegreffe and Pinter (2019).",
"Liu et al. (2020) argue that it is too early for the debate to take place because there are no good definition and evaluation of explanations.",
"The authors propose a Definition Driven Pipeline (DDP) to evaluate explanations based on the definition of faithfulness.",
"They show that following this DDP can produce an evaluation of explanations that is less biased and can even drive the development of new faithful explanations.",
"Calling for more clearly differentiating between faithfulness and plausibility when evaluating explanation, Jacovi and Goldberg (2020) define five guidelines for evaluating faithfulness, building upon the common pitfalls and sub-optimal practices they observed in the literature.",
"They propose an organization of the literature into three types: model assumption, prediction assumption, and linearity assumption.",
"They state that the distinction between Jain and Wallace (2019) and Wiegreffe and Pinter (2019) is the underlying assumptions they use for evaluating attention heat-maps as explanations.",
"The former attempts to provide different explanations of similar decisions per instance (therefore linked to prediction assumption ).",
"The latter critiques the former and is more anchored in the model assumption type of work.",
"The notion of plausibility of attention-based explanations implies asking humans to evaluate whether attention provides a plausible explanation for the model's decisions.",
"A first issue is whether human judges can agree on what plausible explanations of a decision (e.g., a prediction) are.",
"In an experiment involving predictions for sentiment analysis and reading comprehension, Vashishth et al. (2019) ask humans to decide whether the top 3 highest 3894 weighted words in 200 samples are relevant for the model's prediction.",
"They reported a very high agreement among judges (i.e., Cohen's over 0 . 8 ), which leads to think that words receiving the highest attention can form a plausible explanation.",
"A second interesting issue is the type of human annotations that should be captured in order to assess model's plausibility.",
"The most common approach is to ask humans to assess attention heatmaps produced by a model.",
"In Vashishth et al. (2019), users assess the relevance of the top 3 highest weighted words, whereas Mohankumar et al. (2020) ask evaluators to decide which of two attention heatmaps better explains the model's prediction as regards to three dimensions: overall prediction, completeness (which heatmap highlights all the words required for the prediction) and correctness (highlights only the important words and not unnecessary words).",
"Another way to assess the difference between human and machine attention, in Sen et al. (2020), consists in asking humans to highlight important words for a classification task.",
"The authors report an agreement percentage around 70% for this task and show that attention weights on top of bi-RNNs align pretty well with human attention.",
"This finding is especially true for words for which annotators agree on the importance.",
"A third line of research (Sood et al., 2020) uses eye tracking measures to investigate whether machine attention match human attention.",
"The authors hypothesize that machine attention distributions should correlate with human attention strategies for a given task (e.g., question answering).",
"They found that human and machine attention distributions are more similar on easier tasks, which may mean that, for difficult tasks, humans required more varied strategies.",
"For LSTMs and CNNs, diverging more from human attention leads to a drop in performance, which is not the case for XLNets.",
"However, the fact that humans could reliably assess model's plausibility does not ensure that the model is faithful (Jacovi and Goldberg, 2020).",
"In fact, Pruthi et al. (2020) cast serious doubts on using attention maps as a way for users to audit explanations in the context of fairness.",
"More precisely, the authors train various architectures of neural network models on datasets that are all gender-biased and whose predictions heavily rely on impermis-sible tokens (e.g., pronouns).",
"An adapted loss function is used to penalize the attention values of these impermissible tokens.",
"The authors conclude that, although the problematic tokens are still used by the models, they do not appear in the attention map, which wrongly leads users to believe that the models are unbiased.",
"In other words, the authors proved that a plausible explanation does not always imply that the explanation is faithful.",
"This section proposes an overview of the different solutions that have been developed to tackle the various challenges raised by the debate.",
"We identify two types of solutions: the first type, presented in Section 9.1, concerns purely technical solutions that are often based on the theoretical and empirical analyses presented in Section 5.",
"The second type of solutions, presented in Section 9.2, leverages user-in-the-loop strategies to align machine attention with human attention.",
"The technical solutions developed to make attention an explanation differ by whether they use attention values directly or indirectly.",
"Within a recurrent network, the representation of an input element contains a summary of the components of its context.",
"As such, the attention weight computed for that element is imprecise because it indirectly focuses on the context.",
"In order to avoid this dispersion, some researchers seek to reinforce the link between attention weights and input elements.",
"Chrysostomou and Aletras (2021) propose a weighted representation c of input elements h i using the attention weights i and scores s i that are specific to the elements themselves: c = (cid:80) i h i i s i .",
"They propose three learning strategies for that score (Linear TaSk, Feature-wise TaSk and Convolutional TaSk) and compare their solutions to three baseline explanations methods (Word Omission, InputXGrad and Integrated Gradients).",
"Their results show that their solutions are an improvement over the baselines.",
"Mohankumar et al. (2020) propose the introduction of more diversity in the hidden states learned by LSTMs, enabling the observation of elements separately from their context.",
"They evaluate two different strategies in their paper: orthogonalization and diversity driven training.",
"The first strategy imposes a constraint of orthogonality on the hidden states, while in the second strategy, the model learns to consider the hidden states sepa-3895 rately thanks to an additional term in the objective function.",
"The authors show that the resulting attention values offer explanations that are not only more faithful, but also more plausible.",
"Tutek and najder (2020) explore different hidden state regularization methods in order to preserve a strong link with the corresponding input elements.",
"They propose a regularization scheme that positively impacts the attention weights by reinforcing their link with the model prediction, which, in turn, leads to more faithful explanations.",
"The above approaches rely on a property of recurrent networks and seek to work on the attention by modifying the representation of the input elements within the network.",
"In parallel, some researchers focus directly on the attention weights.",
"Moradi et al. (2021) modify the loss function by adding a term that penalizes non-faithful attention.",
"In order to quantify faithfulness, they propose a measure that combines three different stress tests: ZeroOutMax, Uniform and RandomPermute.",
"They show that their method optimizes faithfulness, while improving the model's performance.",
"Bai et al. (2021) propose to weight the elements of the input X to counter the effect of combinatorial shortcuts (see Section 5).",
"The weighting scheme is based on the fact that when estimating E (Y|X (cid:12) M) in attention, where M are masks applied ( (cid:12) ) to the elements of the input X, the choice of masks M is biased by X and Y because of the key and query elements when computing attention.",
"The authors therefore weights the instances by w = P ( y ) P ( y | m ) to disconnect m from y , and, in turn, to encourage m to select meaningful elements of x to predict y .",
"Another way to make attention become explanation is to bring users into the loop.",
"This approach is sometimes called supervised attention, as the user attention is used by the model during training.",
"Strout et al. (2019) show that using human rationale to supervise attention can produce explanations that are better accepted by users, but can also lead to better results in terms of performance.",
"Zhong et al. (2019) modify an attention-based LSTM to make it match user provided attention.",
"In order to do that, they compare the distributions of machine and user attention and use a Kull-backLeibler divergence between the two distributions to penalize the attention of the model.",
"In the same idea of supervised attention, Heo et al. (2020) extend the meta-learning technique called neural processes to include attention.",
"Their Neural Attention Processes (NAP) are designed to consider user-provided attention in an active learning fashion through the use of context points.",
"Kanchinadam et al. (2020) also extend the training of attention to obtain a supervised version of attention.",
"Their approach consists in the addition of a term in the objective function of their model to penalize the difference between the machine and the user attention.",
"As in Heo et al. (2020), the authors make use of active learning in their method called Rationale-based Active Learning with Supervised Attention (RALSA) to collect user attention.",
"Finally, Arous et al. (2021) introduce MApping human Rationales To Attention (MARTA), a Bayesian framework to include human rationale in order to adapt machine attention.",
"As for all other works in this section, the method improves the performance of the model while providing human-understandable explanations.",
"As stated earlier in this paper, one of the difficulties in this debate is that the insights are brought from papers of different areas that do not always cite each other.",
"In fact, even inside a particular area, papers do not always refer to each other.",
"In this section, we aim at bridging the gap between the different papers and their area in order to extract the main conclusions and some points of tension.",
"First of all, like Thorne et al. (2019) who state that LIME can be used for explanation, thus questioning the need for attention, Bastings and Fil-ippova (2020) state that saliency methods can be used for explanation, removing the need for attention.",
"Therefore, according to Bastings and Filip-pova (2020), if explanation tools already exist, why is the debate about attention useful?",
"Two answers can be provided to this question.",
"First, attention is something that is learned for performance purposes, so it would be useful if it could be used as explanation also, instead of using additional post-hoc tools.",
"Second, the existence of the debate kick-started solutions that are now moving towards explanation.",
"Solutions for making attention explanation should consider the two sides of explanation: faithfulness and plausibility.",
"This subject is at the heart of the debate, as Wiegreffe and Pinter (2019) already mentioned the focus of Jain and Wallace 3896 (2019) on faithful explanations only.",
"Indeed, users may not be satisfied by explanations that are only faithful, as they need to be plausible for them too.",
"The right balance between plausibility and faithfulness may lie in human-based evaluations (Sec-tion 8) and supervised attention (Section 9.2).",
"That being said, faithfulness should also be evaluated on its own right, without any consideration of plausibility, to check if the explanation matches the model behavior.",
"However, as explained by Jacovi and Goldberg (2020), faithfulness should not be evaluated in a binary fashion: the level of faithfulness needed for attention to be accepted as an explanation should be measured.",
"Furthermore, the faithfulness of attention is generally evaluated with gradient-based techniques, and other techniques like LIME, as a ground truth.",
"However, several works show that these techniques can lead to unexpected (and potentially misleading) results (Feng et al., 2018; Slack et al., 2020).",
"As human-based evaluations are used to assess the plausibility of explanations, and cannot be used for assessing faithfulness (Jacovi and Goldberg, 2020), the question of how to evaluate faithfulness is still open.",
"Still on the subject of evaluation, we noted that the different contributions to the debate are often based on different setups (as outlined by Table 1).",
"Indeed, except for the analysis of attention on different tasks (Section 6), the contributions often base their claims on one or two tasks of their choice.",
"The same issue has been observed with the use of different input embeddings and different architectures surrounding the attention layer(s).",
"However, authors like Liu et al. (2020) stress that the lack of a common ground when discussing faithfulness, plausibility and explanations is not conducive to finding answers to the debate.",
"On the side of solutions, the common intuitive solution in interpretability and explanation that regularizing a model to be sparse improves our understanding of the model is not well supported in the literature for attention.",
"In fact, some authors like Meister et al. (2021) note that inducing sparsity may in fact reduce the faithfulness of attention.",
"Another perspective that is better suited for obtaining faithful explanations is effective attention (Brunner et al., 2020; Kobayashi et al., 2020; Sun and Marasovic, 2021).",
"Indeed, while attention per se may not be explanation, further studies and uses of effective attention as a sub-part of attention may prove useful to learn a faithful explanation.",
"If plausible explanations, alongside faithfulness, are needed, supervised attention is a good perspective.",
"The argument for supervised attention is wellfounded: if attention is not explanation and if faithfulness is not enough, then making machine attention match human attention may be a solution.",
"While one can argue that attention has originally been introduced for performance purposes and that supervised attention may work against this advantage, several studies show that, in fact, guiding attention increases performance (e.g., Strout et al. (2019)).",
"Supervised attention is therefore a solution that both optimizes performance and explainability.",
"The main cost of this solution is that it requires the participation of users, but solutions can handle few-shot user annotations (e.g., Heo et al. (2020)).",
"Grimsley et al. (2020) offer a philosophical perspective on the debate.",
"They show that works studying attention as explanation do so in a causal framework.",
"They argue that it is an issue because the object of study does not fit in that type of framework.",
"The reason is that the link between the attention layer and the model's output cannot be isolated from the other components of the model.",
"They conclude that attention weights alone cannot be used as causal explanation for model behavior (Grims-ley et al., 2020, p. 1786).",
"This entails that assuming causality when evaluating the explanatory power of attention is doomed to fail by design.",
"The authors propose non-causal explanation paradigms to explore the issue, such as mathematical, structural modal, and minimal-model explanations.",
"We have shown that the debate about the question is attention explanation? already produced a vast and diverse literature.",
"Throughout our analysis, we highlighted various insights that can help advance the debate: theoretically refining concepts around the notion of explanation (in particular plausibility and faithfulness), developing a common ground in the evaluation setup (e.g., similar input embeddings and architectures), extending the studies and uses of effective attention, and improving the integration of users for a supervised attention.",
"We intend that our work provides a solid ground for further research, calling for more integration to answer the question is attention explanation? .",
"In particular, combining the findings from the different areas (e.g., to produce a supervised effective attention) seems to be among the most promising avenues.",
"This research Walloon region with",
"benefited from the support of the a Win2Wal funding.",
"Alana de Santana Correia and Esther Luna Colombini.",
"2021.",
"Attention, please!",
"A survey of neural attention models in deep learning.",
"arXiv:2103.16775 .",
"Kawin Ethayarajh and Dan Jurafsky.",
"2021.",
"Attention flows are shapley value explanations.",
"In Proceedings of ACL-IJCNLP .",
"Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber.",
"2018.",
"Pathologies of neural models make interpretations difficult.",
"In Proceedings of EMNLP , pages 3719 3728.",
"Andrea Galassi, Marco Lippi, and Paolo Torroni.",
"2020.",
"Attention in natural language processing.",
"IEEE Transactions on Neural Networks and Learning Systems , 32(10):42914308.",
"Christopher Grimsley, Elijah Mayfield, and Julia RS Bursten.",
"2020.",
"Why attention is not explanation: Surgical intervention and causal reasoning about neural models.",
"In Proceedings of LREC , pages 17801790.",
"Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi.",
"2018.",
"A survey of methods for explaining black box models.",
"ACM Computing Surveys , 51(5):142.",
"Jay Heo, Junhyeon Park, Hyewon Jeong, Kwang Joon Kim, Juho Lee, Eunho Yang, and Sung Ju Hwang.",
"2020.",
"Cost-effective interactive attention learning with neural attention processes.",
"In Proceedings of ICML , pages 42284238.",
"Alon Jacovi and Yoav Goldberg.",
"2020.",
"Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness?",
"In Proceedings of ACL , pages 41984205.",
"Sarthak Jain and Byron C Wallace.",
"2019.",
"Attention is not explanation.",
"In Proceedings of NAACL-HLT , pages 35433556.",
"Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, and Jun Zhao.",
"2021.",
"The logic traps in evaluating post-hoc interpretations.",
"arXiv:2109.05463 .",
"Teja Kanchinadam, Keith Westpfahl, Qian You, and Glenn Fung.",
"2020.",
"Rationale-based human-in-the-loop via supervised attention.",
"In Proceedings of the KDD workshop DaSH .",
"Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui.",
"2020.",
"Attention is not only a weight: Analyzing transformers with vector norms.",
"In Proceedings of EMNLP , pages 70577075."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"other",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"For languages without natural word boundaries, like Japanese and Chinese, word segmentation is a prerequisite for downstream analysis.",
"For Japanese, segmentation is often done jointly with part of speech tagging, and this process is usually referred to as morphological analysis.",
"Morphological analyzers are trained on data hand-annotated with segmentation boundaries and part of speech tags.",
"A segmentation dictionary or character n-gram information is also provided as additional inputs to the model.",
"Incorporating this extra information makes models large.",
"Modern neural morphological analyzers can consume gigabytes of memory.",
"We propose a compact alternative to these cumbersome approaches which do not rely on any externally provided n-gram or word representations.",
"The model uses only unigram character embeddings, encodes them using either stacked bi-LSTM or a self-attention network, and independently infers both segmentation and part of speech information.",
"The model is trained in an end-to-end and semi-supervised fashion, on labels produced by a state-of-the-art analyzer.",
"We demonstrate that the proposed technique rivals performance of a previous dictionary-based state-of-the-art approach and can even surpass it when training with the combination of human-annotated and automatically-annotated data.",
"Our model itself is significantly smaller than the dictionary-based one: it uses less than 15 megabytes of space.",
"Languages with a continuous script, like Japanese and Chinese, do not have natural word boundaries in most cases.",
"Natural language processing for such languages requires to perform some variation of word segmentation.",
"Although some NLP applications, like neural machine translation, started to use unsuper... Encoder ... ... ... ... ... ... ...",
"vised segmentation methods (Kudo and Richardson, 2018), resulting segmentation often has decisions which are not natural to humans.",
"Supervised segmentation based on a human-defined standard is essential for applications which are designed for interaction on a word-level granularity, for example, full-text search.",
"Segmentation is commonly done jointly with part of speech ( POS ) tagging and usually referred to as Morphological Analysis.",
"Modern Japanese Morphological Analyzers ( MA ) are very accurate, having a >99 segmentation tokenwise F1 score on news domain and a >98.5 F1 on web domain (Tolmachev et al., 2018).",
"They often use segmentation dictionaries which define possible words.",
"Also, their models are generally large and unwieldy, spanning hundreds of megabytes in case of traditional symbolic feature-based approaches.",
"Neural models with word or n-gram embeddings are even larger, easily reaching gigabytes.",
"This makes it difficult to deploy MA in space-constrained environments such as mobile applications and browsers.",
"It has been shown that simple or straightforward models can match or outperform complex models when using a large number of training data.",
"For example, a straightforward backoff technique rivals a complicated smoothing technique for language models (Brants et al., 2007).",
"Pretraining a bidirectional language model on a large dataset helps to solve a variety of NLP tasks (Devlin et al., 2018).",
"Our approach is inspired by this line of work.",
"Contributions We propose a very straightforward fully-neural morphological analyzer which uses only character unigrams as its input 1 .",
"Such an analyzer, when trained only on human-annotated gold data has low accuracy.",
"However, when trained on a large amount of automatically tagged silver data, the analyzer rivals and even outperforms, albeit slightly, the bootstrapping analyzer.",
"We conclude that there is no need for rich input representation.",
"Neural networks learn the information to combine characters into words by themselves when given enough data.",
"Ignoring explicit dictionary information and rich input representations makes it possible to make analyzers that are highly accurate and very compact at the same time.",
"We also perform ablation experiments which show that the encoder component of such an analyzer is more important than character embeddings.",
"Segmentation is a cornerstone requirement for processing languages with a continuous script, and thus it has been studied for a long time.",
"Most current approaches use either rich feature representation, e.g. character n-grams or their embeddings, or a segmentation dictionary.",
"There exist two main lines of approaches: pointwise and search-based.",
"Pointwise approaches make a segmentation decision for each character, usually based on the information from its surroundings.",
"Search-based approaches look for a maximum scored interpretation in some structure over the input sentence.",
"Most Japanese analyzers use segmentation dictionaries which define corpus segmentation standards.",
"They usually have rich POS information at-1 The source code is avaliable at https://github.com/ eiennohito/rakkyo tached and are human-curated.",
"One focus of segmentation dictionaries is to be consistent: it should be possible to segment a sentence using the dictionary entries only in a single correct way.",
"Such dictionaries are often maintained together with annotated corpora.",
"On the other hand, Chinese-focused systems do not put much focus on dictionaries.",
"Still, almost all aproaches use rich feature templates or additional resources such as pretrained character n-gram or word embeddings, which in-crease the model size.",
"Pointwise approaches make a segmentation decision independently for each position.",
"They can be seen as a sequence tagging task.",
"Such approaches are more popular for Chinese.",
"KyTea (Neubig et al., 2011) is an example of this approach in Japanese.",
"It makes a binary decision for each character: whether to insert a boundary before it or not.",
"It can be seen as sequence tagging with {B, I} tagset.",
"POS tagging is done after inferring segmentation.",
"The decisions are made by feature-based approaches, using characters, character n-grams, character type information, and dictionary information as features.",
"KyTea can use word features obtained from a dictionary.",
"It checks whether the character sequence before and after the current character forms a word from the dictionary.",
"It also checks whether the current word is inside a word.",
"Neural networks were shown to be useful for Japanese in this paradigm as well (Kitagawa and Komachi, 2017).",
"They use character embeddings, character type embeddings, character n-gram embeddings, and tricks to incorporate dictionary information into the model.",
"Many studies on Chinese adopt the pointwise approach.",
"Often, the segmentation task is reformulated as sequence tagging (Xue, 2003) with {B, I, E, S} tagset.",
"Peng et al. (2004) showed that CRFs help further in this task.",
"This tactic was followed by many subsequent feature-based approaches (Tseng et al., 2005; Zhao et al., 2006; Zhang et al., 2013), using character n-gram, character type and word features.",
"Neural networks were applied to this paradigm as well.",
"Zheng et al. (2013) used a feed-forward network on character and categorical features that were shown to be useful for computing a segmentation score from a fixed window.",
"Qi et al. (2014) used a similar architecture.",
"They predicted not only segmentation but POS tags and performed named entity recognition as well.",
"The character representation was pretrained on a language modeling task.",
"Shao et al. (2017) used a bidirectional recurrent network with GRU cells followed by a CRF layer for joint segmentation and POS tagging.",
"They used pretrained character n-gram embeddings together with sub-character level information extracted by CNNs as features.",
"Using a dictionary with NN is also popular (Zhang et al., 2018b; Liu et al., 2018).",
"Search-based approaches induce a structure over a sentence and perform a search over it.",
"A most frequently used structure is a lattice which contains all possible segmentation tokens.",
"The search then finds the highest scoring path through the lattice.",
"Another branch of search-based approaches splits decisions into transitions (starting a new token and appending a character to the token) and searches for the highest scoring chain of transitions.",
"This also can be seen as dynamically constructing a lattice while performing the search in it at the same time.",
"Lattice-based approaches are popular for the Japanese language.",
"Most of the time, the lattice is based on words which are present in a segmentation dictionary and a rule-based component for handling out-of-dictionary words.",
"Usually, there are no machine-learning components in lattice cre-ation, but the scoring can be machine-learning based.",
"We believe that the availability of high quality consistent morphological analysis dictionaries is the reason for that.",
"Still, the work of Kaji and Kitsuregawa (2013) is a counterexample of a lattice-based approach for Japanese which uses a machine-learning component for creating the lattice.",
"Traditional lattice-based approaches for Japanese use mostly POS tags or other hidden information accessible from the dictionary to score paths through the lattice.",
"JUMAN (Kuro-hashi, 1994) is one of the first analyzers, which uses a hidden Markov model with manually-tuned weights for scoring.",
"Lattice path scores are computed using connection weights for each pair of part of speech tags.",
"Probably the most known and used morphological analyzer for Japanese is MeCab (Kudo et al., 2004), where CRFs were used for learning the scoring.",
"MeCab is very fast: it can analyze almost 50k sentences per second.",
"It also achieves acceptable accuracy, and so the tool is very popular.",
"The speed is realized by precomputing feature weights, but it takes a lot of space when the total number of features gets large.",
"For example, the UniDic model for modern Japanese v2.3.0 2 takes 5.5GB because it uses many feature templates.",
"There were studies which tried to integrate NN into lattice-based approaches as well.",
"Juman++ (Morita et al., 2015) uses dictionary-based lattice construction with the combination of two models for path scoring: the feature-based linear model using soft-confidence weighted learning (Wang et al., 2016) and a recurrent neural network (Mikolov, 2012).",
"It significantly reduced the number of both segmentation and POS tagging errors.",
"However, it was very slow, being able to analyze only about 15 sentences per second, hence the original version was impractical.",
"The following improvement (Tolmachev et al., 2018) greatly increased analysis speed by doing aggressive beam trimming and performing heavyweight NN evaluation only after lightweight scoring by the linear model.",
"Direct lattice-based approaches are not very popular for Chinese, but some are lattice-based in spirit.",
"A line of work by Zhang and Clark (2008, 2010) builds the lattice dynamically from partial words, searching paths with a perceptron-based scorer and customized beam search.",
"The dictionary is built dynamically from the training data as frequent word-tag pairs which help the system to prune unlikely POS tags for word candidates.",
"One more variation on lattice-based approaches for Chinese is the work by Cai and Zhao (2016).",
"In this work, a segmentation dictionary is used to construct a subnetwork, which combines character representations into word representations used for computing sentence-wise segmentation scores.",
"This can be seen as explicitly learning dictionary information by a model.",
"Resulting segmentation is still created from the start to the end by growing words one by one while performing beam search.",
"The follow up (Cai et al., 2017) simplifies that model and shows that greedy search can be enough for estimating segmentation when using neural networks.",
"Still, this line of work does not consider POS tagging.",
"Transition-based approaches treat input data (most frequently characters) as input queue and store a current, possibly incomplete, token in a buffer.",
"Models usually infer whether they should create a new token from a character in the input 2 https://unidic.ninjal.ac.jp/ queue or append an input character to the already existing token.",
"Neural models are often used in this paradigm (Ma and Hinrichs, 2015; Zhang et al., 2016; Yang et al., 2017; Ma et al., 2018; Zhang et al., 2018a).",
"Almost all of them use both word and charcter n-gram embeddings.",
"This paradigm was extended to do parsing jointly with MA (Ha-tori et al., 2012; Kurita et al., 2017).",
"Semi-supervised approaches to segmentation and POS tagging fall into several categories.",
"The first one uses raw or automatically-annotated data to precompute feature representations and then uses these feature representations for supervised learning.",
"For example, Sun and Xu (2011) and Wang et al. (2011) use data from automatically segmented texts as features.",
"They precomute the features beforehand and train an analyzer afterwards.",
"In addition to that, Zhang et al. (2013) use a variation of smoothing for handling automatic annotation errors.",
"A lot of neural-based methods pretrain word and character n-gram embeddings.",
"Yang et al. (2017) pretrain a part of the model on different data sources, including automatically segmented text, but the model itself is trained only on the gold data.",
"Another approach is to use heterogeneous data (annotated in incompatible annotation standards).",
"In addition to corpus statistics from a raw corpus, Zhao and Kit (2008) exploit heterogeneous annotations.",
"Li et al. (2015) use corpora with different annotation standards.",
"They combine tags into bun-dles (e.g. [NN, n]) and infer them at the same time while paying attention to ambiguity.",
"Chen et al. (2016) train a classifier that can annotate several standards jointly.",
"Finally, it is possible to use raw or automatically-annotated data directly.",
"A study (Suzuki and Isozaki, 2008) is an example of a feature-based algorithm which uses raw data.",
"Tri-training (Zhou and Li, 2005) is a generic way to use raw data.",
"They propose to train on automatically analyzed examples where two of three diverse analyzers agree.",
"Sgaard (2010) show that tri-training helps English POS-tagging with SVM and MaxEnt-based approaches.",
"Zhou et al. (2017) use self-training and tri-training for Chinese word segmentation.",
"They, however, also pretrain other features like word-context character embeddings, chrarac-ter unigrams and bigrams.",
"In order for MA to be practical, it should be not only accurate, but also fast and have relatively compact models.",
"The speed of search-based approaches is dependent on how computationally heavy a weighting function is.",
"Heavyweight models, like neural networks, require a large number of computations, and we think that it will be very difficult to create a practical search-based fully NN morphological analyzer with analysis speed comparable to traditional analyzers.",
"We do not want to use any explicit information about how to combine characters to form a word, like dictionaries, which takes space and is not trivial to incorporate into a character-based model.",
"We also want our model to be fast, at least comparable with the speed of traditional analyzers.",
"To this end, we follow a pointwise approach and force the neural network to learn the dictionary information from a corpus.",
"We use a straightforward architecture shown in Figure 1. We embed each character, and then apply an encoder, which produces an encoded representation for each character.",
"Encoded character representations are independently transformed into tag representations.",
"For each tag, the encoded representation is projected with a fully-connected layer with SeLU non-linearity (Klambauer et al., 2017).",
"Finally, we multiply the tag representation by tag-specific embeddings and apply softmax non-linearity to get normalized tag probabilities.",
"Encoder Architectures We use two architectures for the encoder: a stacked bidirectional recurrent architecture with LSTM cells (Hochre-iter and Schmidhuber (1997), bi-LSTM ) and a Transofrmer-inspired mutihead self-attention network (Vaswani et al. (2017), SAN ).",
"We concatenate both directions of bi-LSTM outputs before passing them to the next layer without residual connections.",
"We also apply layer normalization (Ba et al., 2016) to the concatenated outputs.",
"We do not use dropout in encoders when using silver data for training.",
"Data Encoding Our model infers a tag for every input character.",
"While this decision is natural for segmentation, POS tags are not usually tagged in this way.",
"For segmentation, we adopt {B, I, E} scheme.",
"For POS tagging we broadcast tags to every character which is contained in a token.",
"We use corB * E * B * * B * E * B * * EOS Seg 4-layered POS Figure 2: An example of full sentence annotation B * ?",
"pora with the JUMAN-based segmentation standard ( Jumandic ), which has 4-layered POS tags: rough POS, fine POS, conjugation type and conjugation form.",
"We treat each tag layer independently in our model, as shown in Figure 2. We also consider a partial annotation scheme , where some tags are unknown.",
"An example of partial sentence annotation is shown in Figure 3. Unknown tags are displayed by ? symbols.",
"We create partially annotated silver data by marking as unknown all tags which are ambiguous in a top-k analysis result.",
"When computing the training loss, we treat unknown tags as padding: corresponding values are masked out of loss computation.",
"Loss Following Vaswani et al. (2017), we smooth softmax labels.",
"They use the technique described by Szegedy et al. (2016), which uniformly distributes some small factor (cid:15) like 0 .",
"1 to incorrect labels.",
"However, we do not induce a uniform smoothing.",
"Instead, we want to prevent the model from being overconfident in its decisions without inducing uniformity.",
"We slightly modify the cross-entropy loss as follows.",
"Remember that softmax probabilities are computed from unnormalized log-probabilities l i as q i = e l i / Z , where Z = (cid:80) j e l j .",
"The cross-entropy loss will be L = (cid:80) i p i log q i , where p i are gold probabilities.",
"In our case the vector p is one-hot, meaning that p c = 1 and other values are zero.",
"This gives a sparse cross-entropy L = log q c = log Z l c , which is often im-Train Test Corpus Sents Tokens Sents Tokens KU 37k 930k 1783 46k Leads 14k 217k 2195 36k Table 1: Benchmark corpora sizes plemented in deep learning frameworks.",
"It has a minimum when log Z is equal to l c , but it makes the model overconfident.",
"Instead, we want to stop when q c = 1 (cid:15) , or in other words e l c / Z = 1 (cid:15) .",
"This gives us our modified loss: L = max ( log Z l c + log (1 (cid:15) ) , 0) .",
"It can be efficiently implemented using the sparse cross-entropy operation.",
"In our experiments we use (cid:15) = 0 .",
"2 .",
"Our final loss is a weighted sum of individual tag softmax losses.",
"We use a weight coefficient of 10 for segmentation and 2 for the first POS tag layer.",
"We conduct experiments on Japanese morphological analysis.",
"For training we use two data sources.",
"The first is usual human-annotated gold training data.",
"The second is silver data from the results of automatic analysis.",
"We use Juman++ V2 the current state-of-the-art analyzer for the JUMAN segmentation standard as the bootstrap analyzer.",
"We use two gold corpora.",
"The first is the Kyoto University Text Corpus (Kurohashi and Nagao (2003), referred to as KU ), containing newspaper data.",
"The second is the Kyoto University Web Document Leads Corpus (Hangyo et al. (2012), referred to as Leads ) which consists of web documents.",
"Corpus statistics are shown in Table 1. We denote models which use gold training data by G .",
"We take raw data to generate our silver annotated data from a crawled web corpus of 9.8B unique sentences.",
"We sample 3B sentences randomly from it and analyze them using the Juman++ baseline model.",
"From it we sample 500M sentences, which become our training silver data, prioritizing sentences which contain at least one not very frequent word.",
"We prepare both top-scored (denoted as T ) and non-ambigous in beam (denoted as B ) variants of the silver data.",
"Our silver data is in-domain for Leads and out-of-domain for KU.",
"Baselines We use four baselines: JUMAN , MeCab , KyTea and Juman++ (V2).",
"For MeCab, Parameter bi-LSTM SAN Char embedding size 128 128 Tag embedding size 32 32 # Layers 4 6 Hidden Size 128 2 32 # Heads 4 Projection Inner Dim 512 # Emedding Parameters 2.38M 2.38M # Total Parameters 3.88M 3.59M Table 2: Hyperparameters for neural models KyTea and Juman++ we train a model using the same dictionary and merged training sections of KU and Leads, which is evaluated on each corpus independently.",
"Neural Models The hyper-parameters of the bi-LSTM-based model are displayed in Table 2. We use all unique characters present in our huge web corpus (18,581) as input.",
"We select sizes of both neural models restricting the total number of parameters to be less than 4M.",
"For optimization we use the Adam optimizer (Kingma and Ba, 2016) with hyperparameters and learning rate scheduling described by Vaswani et al. (2017).",
"We train all models on Nvidia GPUs.",
"On a single GeForce 1080Ti the bi-LSTM model can consume about 4,500 sentences per second and the SAN-based model about 6,500 sentences per second for training.",
"We denote bi-LSTM-based models by L and SAN-based models by S in experimental results.",
"Treatment of Gold Data Existing methods are already highly accurate on this task, and it is difficult to perform hyperparameter and architecture selection reliably with a small development set.",
"Because of that, we split our data in an unusual way.",
"Generally, we use the silver data (B or T) as a train set, the human-annotated original training data (G) as a dev set and the original test set as a test set.",
"Our hyperparameter selection decisions were based entirely on this setting.",
"We do not perform additional hyperparameter search for a combination of silver and gold data for training.",
"The exception is cases when we use only gold data for training.",
"For that, we cheat and optimize our hyperparameters, including dropout, which we use only for this setting, on test scores.",
"Nonthe-less, the best scores on this setting are significantly lower than the worst baseline.",
"Experimental Results Results of our experiments are shown in Table 3. For each analyzer, we show six values.",
"Seg is a tokenwise F1 measure on segmentation.",
"+P1 requires the 1st layer of POS tags (coarse-grained POS tags) also to match gold data.",
"For the sake of simplicity, we use only POS tags co-located with B Seg tags for the evaluation.",
"+P2 is analogous for the 2nd layer of POS tags.",
"For all results in this table, we train NN-based models for a single epoch, which means the training procedure sees each silver sentence only once .",
"We use one gold example for ten silver examples for mixed-data settings, looping over the gold data until the silver data is extinguished.",
"Training neural models only on gold data quickly results in overfitting which can be seen in L:G and S:G results.",
"These scores are significantly lower than that of our worst baseline: JUMAN.",
"Models trained on only non-ambiguous silver data (*:B) are comparable to the best baseline on Leads (in-domain), although they cannot reach the accuracy of Juman++ on KU.",
"Using top-only silver data (*:T) further improves accuracy.",
"Both of our models in this setting slightly outperform previous Leads SOTA and have more or less the same accuracy.",
"On KU, the LSTM-based model seems to be slightly better than the SAN-based one.",
"In the context of semi-supervised learning, tri-training emphasizes using data when there exists a disagreement between the analyzers.",
"Instead, we throw Size, MB Analyzer Dictionary Model Total JUMAN 288 1 289 MeCab 312 8 320 KyTea:G 569 569 KyTea:TG 3218 3218 Juman++ 157 288 434 bi-LSTM 1 14 15 SAN 1 13 14 Table 4: MA model sizes for Jumandic away difficult cases for beam-based data, denois-ing it in a sense, but NN seem to handle that kind of noise relatively well.",
"Adding the gold data to the silver data (*:BG, *:TG) allows both models to improve their accuracy further.",
"Results on Leads are comparable for both L:TG and S:TG and higher than the previous SOTA, giving segmentation error reduction of 8% in comparison to Juman++.",
"On KU, the LSTM-based models seem to perform better without a significant difference on the TG and BG settings, while still underperforming the Juman++ baseline except +P2 case, where both models are stronger than Juman++.",
"Pre-training Scenario We also check the fine-tuning approach when we first learn the representations on a large corpus and then refine the model on a gold corpus.",
"S:B G(a-d) are four such runs of a SAN-based model with different hyperparameters.",
"All four runs are initialized with the same S:B model and trained on the gold data only.",
"We found it difficult to find good hyperparameters for fine-tuning.",
"The models were prone to overfit very fast.",
"Mixing gold and silver data resulted in stable training without hyperparameter search.",
"Model Sizes We compare the model sizes of analyzers in Table 4. In case of dictionary-based analyzers the dictionary takes most of the space.",
"We count sizes of compiled models for all analyzers.",
"KyTea, as another example of pointwise MA, uses string-based features and treats its features uniformly, hence dictionary size is not applicable to it.",
"A KyTea:TG variant that uses additional 2M silver sentences takes almost 6x the space of the original model, reaching 3GB.",
"When using neural networks, on the other hand, it is possible to control model sizes more easily.",
"Moreover, our proposed Analyzer KU Leads KyTea-D:G 98.45 97.04 KyTea-D:T 98.51 98.10 KyTea-D:TG 99.18 98.31 KyTea:G 99.13 97.98 KyTea:TG 99.33 98.42 Table 5: KyTea test Seg F1 comparison.",
"Dictionary-based analyzers store other information, like readings and lemma forms, in addition to token surface forms and POS, but removing that information would not make model sizes comparable with NN-based ones.",
"For NN-based analyzers, we count a dictionary as 1 MB because they need a character-to-id mapping to work.",
"However, the list of characters contains non-frequently used characters, some of which could be treated as UNKs without any accuracy loss.",
"We also treat weights as 4-byte floating points, and so it would be possible to further decrease the NN model size, for example by using less precise storage formats.",
"Dictionaries Dictionary information is usually added to character-based models either using a binary feature vector (e.g. a dictionary contains a trigram to the left of the decision point) or word embeddings.",
"We believe that a dictionary can be replaced with a large training corpus which includes most of the entries from that dictionary.",
"A neural model with only the unigram character input can solve word segmentation and POS tagging only if it builds some knowledge about the dictionary internally.",
"Our main experimental results (Table 4) show that it seems to be the case and there is no need to model the dictionary explicitly.",
"Table 5 shows an effect of using dictionaries and silver data on KyTea, an instance of symbolic feature-based analyzer.",
"Models tagged with T use additional 2M silver training data analyzed by Ju-man++.",
"KyTea has better accuracy in settings when it uses the dictionary.",
"The dictionary even helps in the setting with additional silver data.",
"Unfortunately, the model size increases as well, limiting the amount of silver data we can use, and the accuracy cannot rival neural approaches.",
"How much data do we need?",
"For our main experiments, we train all models for a single epoch on our silver dataset.",
"Figure 4 shows KU train (our dev set) Seg F1 curves for L:B and S:B for three epochs.",
"We ran each experiment four times with different random seeds.",
"The learning curves become less sloppy when reaching 500M sentences but do not become flat there.",
"The training does not seem to completely converge even after 3 epochs.",
"We still use one full epoch (500M) for our main experiments.",
"The curves are pretty noisy, but it seems that the model is robust with respect to initialization.",
"SAN Ablation Experiments The proposed MA achieves high accuracy while having very compact models.",
"The inputs do not contain any information on how to combine characters into words and we assume that the model learns it from the data.",
"To get the model size even smaller, we check which model parts contribute more to the resulting analysis accuracy, meaning that they contain the dictionary knowledge.",
"We perform ablation experiments on the SAN model by varying its hyperparameters and checking how it affects the accuracy of the resulting analyzer.",
"The LSTM model could not converge in this setting.",
"We used 2.5M of silver training data for these experiments.",
"Figure 5 shows the segmentation F1 score when varying input embedding, shared representation and SAN hidden dimension sizes.",
"JUMAN score, as a lowest acceptable baseline, is shown in red.",
"The embedding size seems to have a lower impact on accuracy than the shared representation and the SAN hidden dimension size.",
"Namely, the (128-16) model with the embedding size of 16 has higher accuracy than the (128-4) model with the embed-16 32 64 128 Embedding dimension 97.0 97.5 98.0 98.5 99.0 99.5 Seg F1 Shr.dim hid.dim 64-4 64-16 128-4 128-16 256-4 256-16 Figure 5: Effect of embedding size on Seg F1 0 2 4 6 8 10 12 14 16 60 70 80 90 100 SAN projection dimension 0 32 64 0 2 4 6 8 10 12 14 16 SAN hidden dimension 97.0 97.5 98.0 98.5 99.0 99.5 Figure 6: Effect of SAN hidden dimension on Seg F1 ding size of 128.",
"Accordingly, we believe that the encoder contributes much stronger to learning the dictionary than character embeddings.",
"One more interesting observation is that the models are still better than JUMAN, while having much less parameters than our base model.",
"We explore more extreme settings of the SAN hidden state, shown in Figure 6.",
"We fix embedding and shared representation dimensions to 128 and vary the SAN hidden and projection dimensions.",
"The lower subgraph is a scale-up version of top graph.",
"The point at SAN hidden size equal to 0 means that we directly use unigram embeddings to predict segmentation without any encoder.",
"The SAN projection size is consistent with accuracy, especially on smaller SAN hidden sizes.",
"An interesting observation here is that the SAN model seems to work even with hidden dimension of 2. When the hidden dimension size reaches 4, the extremely small model accuracy is higher than the JUMAN baseline.",
"This shows that it is possible to create an extremely small MA with acceptable accuracy.",
"Label Uncertainty and Error Analysis Because our neural models infer all tags independently, they can be inconsistent, for example, a word can have different POS tags on different characters.",
"We looked into frequent 3-grams where the central word has inconsistent tags (POS tags are not the same for all characters, or they do not form a correct 4-layered tag).",
"Most of these trigrams occur in ambiguous situations.",
"We have picked several examples which are actually errors in Juman++ segmentation as well.",
"They are shown in Table 6.",
"In Japanese, words often have several orthographic forms.",
"The most common variant is usage of hiragana (phonetic script) instead of kanji (ideographic characters).",
"Verbs can have different possible endings, e.g. and (magaru to turn or bend) are two orthographic variants of a single verb.",
"There are also colloquial variants; namely the verb is usually read as (iu to say), but can also be written as because the pronunciation is close.",
"These phenomena are relatively common in web and user-generated texts, but corpus and segmentation dictionary coverage of them is not very good.",
"The first two examples contain alternative colloquial spellings of words (ko:iu such) and (sugoi awesome).",
"In the first example the system incorrectly recognizes | (kyoku tte) as (magatte) a conjugation of .",
"The fourth example (a chanto asobitai/ac-chan to asobitai ah! [I] want to play properly/[I] want to play with ac-chan <person name>) is actually ambiguous and can have two meanings.",
"The second one is more probable though.",
"The fact that frequent words with uncertain POS tags are Juman++ errors as well implies that insufficient gold data causes the uncertainty.",
"We also compare differences between Juman++ and our models to get an insight on general problems with proposed methods.",
"Neural models make many errors in hiragana words.",
"For example, both neural models make errors in the sentence | | | | (jyakusya ga to:ta sarete weaklings lose to natural selection).",
"LSTM makes a segmentation mistake ( | ) and SAN does a POS tagging mistake, while Juman++ produces the correct answer.",
"It knows that is a special type of noun that is often followed by from POS tags.",
"Hiragana-based spellings of most content words are somewhat rare in Japanese, and NN models do not have enough training data for these spellings.",
"It could be possible to improve the situation by using data augmentation techniques.",
"Another frequent problem is segmentation and tagging of proper nouns.",
"We believe that this problem could be solved by data augmentation, but we leave this as future work.",
"We presented a novel way to train small neural models for Japanese Morphological analysis by directly feeding the network a large number of silver training data.",
"Our method achieves new SOTA on web domain when combining the silver data with gold one.",
"This is an empirical evidence that there is no need for feature engineering for neural morphological analysis at all.",
"A neural network can learn implicit dictionary information itself and it does not need to be large.",
"We also show that training by mixing the data together works better than fine-tuning and is more stable.",
"Our work can be extended in the future in different ways.",
"We will consider how to make the model to recognize new words, which is an important feature for a practical analyzer.",
"Using tri-training also seems to be a natural extension for this work.",
"It is easy to provide diverse models, required for tri-training, by using different types of encoder and varying network parameters.",
"Furthermore, our tagging approach should be universal and work with other tasks like named entity recognition.",
"A method to incorporate tags with a large number of possible values (like readings and lemmas) without introducing embeddings for them, hence keeping the models small, could also be a useful extension."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Knowledge graphs suffer from sparsity which degrades the quality of representations generated by various methods.",
"While there is an abundance of textual information throughout the web and many existing knowledge bases, aligning information across these diverse data sources remains a challenge in the literature.",
"Previous work has partially addressed this issue by enriching knowledge graph entities based on hard co-occurrence of words present in the entities of the knowledge graphs and external text, while we achieve soft augmentation by proposing a knowledge graph enrichment and embedding framework named EDGE .",
"Given an original knowledge graph, we first generate a rich but noisy augmented graph using external texts in semantic and structural level.",
"To distill the relevant knowledge and suppress the introduced noise, we design a graph alignment term in a shared embedding space between the original and augmented graph.",
"To enhance the embedding learning on the augmented graph, we further regularize the locality relationship of target entity based on negative sampling.",
"Experimental results on four benchmark datasets demonstrate the robustness and effectiveness of EDGE in link prediction and node classification.",
"Knowledge Graph (KG) 1 embedding learning has been an emerging research topic in natural language processing, which aims to learn a low dimensional latent vector for every node.",
"One major challenge is sparsity.",
"Knowledge graphs are often incomplete, and it is a challenge to generate low-dimensional representations from a graph with many missing edges.",
"To mitigate this issue, auxil-1 Knowledge graph usually represents a heterogeneous multigraph whose nodes and relations can have different types.",
"However in the work, we follow (Kartsaklis et al., 2018), consider knowledge graph enrichment problem where only one relation type (connected or not) appears.",
"iary texts that are easily accessible have been popularly exploited for enhancing the KG (as illustrated in Figure 1).",
"More specifically, given that KG entities contain textual features, we can link them to an auxiliary source of knowledge, e.g. , WordNet, and therefore enhance the existing feature space.",
"With notable exceptions, the use of external textual properties for KG embedding has not been extensively explored before.",
"Recently, (Kartsaklis et al., 2018) used entities of the KG to query BabelNet (Navigli and Ponzetto, 2012), added new nodes to the original KG based on co-occurrence of entities, and produced more meaningful embeddings using the enriched graph.",
"However, this hard-coded, co-occurrence based KG enrichment strategy fails to make connections to other semantically related entities.",
"As motivated in Figure 1, the newly added entities wound \", arthropod \" and protective body \", are semantically close to some input KG entity nodes (marked in red).",
"However, they cannot be directly retrieved from BabelNet using co-occurrence matching.",
"In this paper, we aim to address the sparsity issue by integrating a learning component into the process.",
"We propose a novel framework, EDGE , for KG enrichment and embedding.",
"EDGE first constructs a graph using the external text based on similarity and aligns the enriched graph with the original KG in the same embedding space.",
"It infuses learning in the knowledge distillation process by graph alignment, ensuring that similar entities remain close, and dissimilar entities get as far from each other.",
"Consuming information from an auxiliary textual source helps improve the quality of final products, i.e. , low dimensional embeddings, by introducing new features.",
"This new feature space is effective because it is obtained from a distinct knowledge source and established based on affinity captured by the learning component of our model.",
"More specifically, our framework takes KG , and an external source of texts, T , as inputs, and generates an augmented knowledge graph, a KG .",
"in generating a KG we are mindful of semantic and structural similarities among KG entities, and we make sure it contains all the original entities of KG .",
"This ensures that there are common nodes in two graphs which facilitates the alignment process.",
"To align KG and a KG in the embedding space, a novel multi-criteria objective function is devised.",
"In particular, we design a cost function that minimizes the distance between the embeddings of the two graphs.",
"As a result, textual nodes ( e.g. , blue nodes in Figure 1) related to each target entity are rewarded while unrelated ones get penalized in a negative sampling setting.",
"Extensive experimental results on four benchmark datasets demonstrate that EDGE outperforms state-of-the-art models in different tasks and scenarios, including link prediction and node classification.",
"Evaluation results also confirm the generalizability of our model.",
"We summarize our contributions as follows:",
"(i) We propose EDGE , a general framework to enrich knowledge graphs and node embeddings by exploiting auxiliary knowledge sources.",
"(ii) We introduce a procedure to generate an augmented knowledge graph from external texts, which is linked with the original knowledge graph.",
"(iii) We propose a novel knowledge graph embedding approach that optimizes a multi-criteria objective function in an end-to-end fashion and aligns two knowledge graphs in a joint embedding space.",
"(iv) We demonstrate the effectiveness and generalizability of EDGE by evaluating it on two tasks, namely link prediction and node classification, on four graph datasets.",
"The rest of the paper is organized as follows.",
"In the next section, we try to identify the gap in the existing literature and motivate our work.",
"Next, in Section 3, we set up the problem definition and describe how we approach the problem by in-depth explanation of our model.",
"We evaluate our proposed model by experimenting link prediction and node classification on four benchmark datasets and present the results and ablation study in Section 4. Finally, we conclude our work and give the future direction in Section 5. 2 Related Work Knowledge graph embedding learning has been studied extensively in the literature (Bordes et al., 2013; Wang et al., 2014; Yang et al., 2015; Sun et al., 2019; Zhang et al., 2019; Xian et al., 2020; Yan et al., 2020; Sheu and Li, 2020).",
"A large number of them deal with the heterogeneous knowledge graph, where it appears different types of edges.",
"While in this work we consider the type of knowledge graph with only one type (i.e. connected or not) of relation, and only focus on entity embedding learning.",
"Our work is related to graph neural networks, such as the graph convolutional networks (GCN) (Kipf and Welling, 2017) and its variants (Wu et al., 2020; Jiang et al., 2019, 2020), which learn node embeddings by feature propagation.",
"In the following, we mainly review the most relevant works in two aspects, i.e., graph embedding learning with external text and knowledge graph construction.",
"The most similar line of work to ours is where an external textual source is considered to enrich the graph and learn low dimensional graph embeddings using the enriched version of the knowledge graph.",
"For instance, (Wang and Li, 2016) annotates the KG entities in text, creates a network based on entity-word co-occurrences, and then learns the enhanced KG.",
"Similarly, (Kartsaklis et al., 2018) adds an edge ( e, t ) to KG per entity e based on co-occurrence and finds graph embeddings using random walks.",
"However, there is no learning component in these approaches in constructing the new knowledge graph.",
"And the enrichment procedure is solely based on occurrences (hard\" matching) of entities in the external text. For graph completion task, (Malaviya et al., 2020) uses pre-trained language models to improve the representations and for Question Answering task, (Sun et al., 2018) extracts a sub-graph G q from KG and Wikipedia, which contains the an-G CNb a s e d D ec od e r Augment GCNb a s e d E n c od e r GCNb a s e d E n c od e r GCNb a s e d D ec od e r graph alignment locality preserving regularization forward path graph alignment locality preserving regularization Legend : existing entities newly added entities Figure 2: Our proposed framework for aligning two graphs in the embedding space. The graph alignment component, LJ , requires an additional matrix, R , that selects embeddings of KG entities from ZT , so the resulting matrix, RZT , would have the same dimension as ZK . Furthermore, LN penalizes additional entities that are unrelated to the target entity, while rewards the related ones. We omit the graph reconstruction loss for simplicity. swer to the question with a high probability and apply GCN on G q which is limited to a specific task. We emphasize that the main difference between our model and previous work is that we first create an augmented knowledge graph from an external source, and improve the quality of node representation by jointly mapping two graphs to an embedding space. To the best of our knowledge, this is the first time that a learning component is incorporated to enriching knowledge graphs. 2.2 Knowledge Graph Construction Knowledge graph construction methods are broadly classified into two main groups: 1) Curated approaches where facts are generated manually by experts, e.g. , WordNet (Fellbaum, 1998) and UMLS (Bodenreider, 2004), or volunteers such as Wikipedia, and 2) Automated approaches where facts are extracted from semi-structured text like DBpedia (Auer et al., 2007), or unstructured text (Carlson et al., 2010). The latter approach can be defined as extracting structured information from unstructured text. In this work, we do not intend to construct a knowledge base from scratch, instead we aim to generate an augmented knowledge graph using side information. Hence, we employ existing tools to acquire a set of new facts from external text and link them to an existing KG. 3 Proposed Model 3.1 Problem Statement We formulate the knowledge graph enrichment and embedding problem as follows: given a knowledge graph KG = ( E , R , X ) with |E| nodes (or enti-ties), |R| edges (or relations) and X R |E| D as feature matrix, where D is the number of features per entity, also given an external textual source, T , the goal is to generate an augmented knowledge graph and jointly learn d ( d << |E| ) dimensional embeddings for knowledge graph entities, which preserve structural and semantic properties of the knowledge graph. The learned representations are then used for the tasks of link prediction and node classification. Link prediction is defined as a binary classification whose goal is to predict whether or not an edge exists in KG, and node classification is the task of determining node labels in labelled graphs. To address the problem of knowledge graph enrichment and embedding, we propose EDGE , a framework that contains two major components, i.e. , augmented knowledge graph construction, and knowledge graph alignment in a joint embedding space. 
3.2 Augmented Knowledge Graph Construction Given the entities of KG and an external source of textual data, T , we aim to generate an augmented graph, a KG , which is a supergraph of KG ( i.e. , KG is a subgraph of a KG ). Augmentation is the process of adding new entities to KG . These newly added entities are called textual entities or textual nodes . A crucial property of a KG is that it contains entities of KG . The presence of these entities establishes a relationship between the two graphs, and such a relationship will be leveraged to learn the shared graph embeddings. To construct a KG , we need to find a set of keywords to query an external source, To obtain high quality keywords and acquire new textual entities, we design the following procedure per target entity e t (For every step of this process refer to Table 1 for a real example from SNOMED dataset). First, we find a set of semantically and struc-Table 1: We employ representation learning algorithms to find a set of semantically and structurally similar entities to each target entity (column 2). We then find a set of keywords, K , that are representative of the target entity (column 3) and use them to query an external text and obtain a set of sentences, S (column 4). Finally, we extract textual entities (column 5), and connect them to the target entity. TargetEntity semanticallyandstructurallySimilarEntities Mostdefinitivekeywords Sentencesobtainedfromauxiliarytext Entitiesobtainedfrominformationextraction Nonvenomousinsectbiteofhipwithoutinfection s e m a n ti c 1. Nonvenomousinsectbiteoffootwithinfection 2. Crushinginjuryofhipand/orthigh 3. Superficialinjuryoflipwithinfection 4. Infectedinsectbiteofhand 1. bite 2. insect 3. nonvenomous 4. infect 1. awoundresultingfrombitingbyananimaloraperson 2. smallair-breathingarthropod 3. notproducingorresultingfrompoison 4. contaminatewithadiseaseormicroorganism 1. wound 2. arthropod 3. poison 4. microorganism s t r u c t u r a l 1. Insectbite,nonvenomous,ofback 2. Tickbite 3. Animalbiteofcalf 4. Insetbite,nonvenomous,offootandtoe Insectbite,nonvenomous,offootandtoeinfected s e m a n ti c 1. Insectbite,nonvenomous,oflowerlimb,infected 2. Infectedinsectbiteofhand 3. Insectbite,nonvenomous,ofhip 4. Insectbitegranuloma 1. bite 2. insect 3. lower 4. skin 1. awoundresultingfrombitingbyananimaloraperson 2. smallair-breathingarthropod 3. movesomethingorsomebodytoalowerposition 4. anaturalprotectivebody 1. wound 2. arthropod 3. position 4. protectivebody s t r u c t u r a l 1. Nonvenomousinsectbiteofhipwithoutinfection 2. Insectbite,nonvenomous,ofback 3. Recurrentinfectionofskin 4. Skinstructureoflowerleg turally similar entities to e t denoted by E e t . This set creates a textual context around e t which we use to find keywords to query an external text, e.g. , WordNet or Wikipedia. Here by query we mean using the API of the external text to find related sentences, S (for instance for a given keyword bite we can capture several sentences from the wikipedia page for the entry biting or find several Synsets 2 from WordNet when we search for bite).",
"Finally, we extract entities from S and attach them to e t .",
"We call these new entities, textual entities or textual features .",
"By connecting these newly found textual entities to the e t , we enhance KG and generate the augmented knowledge graph, a KG .",
"We observed that the new textual entities are different from our initial feature space.",
"Also, it is possible that two different target entities share one or more textual nodes, hence the distance between them in a KG would decrease.",
"The implementation details of this process is provided in Supplementary materials.",
"Querying an external text allows us to extend the feature space beyond the context around e t .",
"By finding other entities in KG that are similar to the target entity and extracting keywords from the collection of them to query the external text, distant entities that are related but not connected would become closer to each other owing to the shared keywords.",
"Figure 1 illustrates a subset of SNOMED graph and its augmented counterpart by following the above procedure.",
"As this figure reveals, the structure of a KG is different from KG , and as a result of added textual nodes, distant but similar enti-2 Synset is the fundamental building block of WordNet which is accompanied by a definition, example(s), etc. ties would become closer.",
"Therefore, augmenting knowledge graphs would alleviate the KG sparsity issue.",
"Although we may introduce noise by adding new entities but later in the alignment process we address this issue.",
"Remarks.",
"In the above procedure, we need to obtain similar entities before looking for textual entities, and the rationality of such a strategy is discussed as follows.",
"One naive approach is to simply use keywords included in the target entity to find new textual features.",
"In this way, we would end up with textual features that are related to that target entity, but we cannot extend the feature space to capture similarity ( i.e. , dependency) among entities.",
"With the help of augmented knowledge graph a KG , we aim to enrich the graph embeddings of KG .",
"However, inevitably, a portion of newly added entities are noisy, and even potentially wrong.",
"To mitigate this issue, we are inspired by Hinton et al. (Hin-ton et al., 2015), and propose a graph alignment process for knowledge distillation.",
"In fact, a KG and KG share some common entities, which makes it possible to map two knowledge graphs into a joint embedding space.",
"In particular, we propose to extract low-dimensional node embeddings of two knowledge graphs using graph auto-encoders (Kipf and Welling, 2016), and design novel constraints to align two graphs in the embedding space.",
"The architecture of our approach is illustrated in Figure 2. Let AK and AT denote the adjacency matrices of KG and a KG , respectively.",
"The loss functions of graph auto-encoders that reconstruct knowledge graphs are defined as: LK = min ZK || AK AK || 2 , (1) LT = min ZT || AT AT || 2 , (2) where AK = ( ZKZ (cid:62) K ) is the reconstructed graph using node embeddings ZK .",
"And ZK is the output of graph encoder that is implemented by a two-layer GCN (Kipf and Welling, 2016): ZK = GCN ( AK , XK ) = AK tanh( AKXKW 0 ) W 1 , (3) where AK = D 12 KAKD 12 K .",
"DK is the degree matrix, tanh ( . ) is the Hyperbolic Tangent function that acts as the activation function of the neurons, W i are the model parameters, and XK is the feature matrix.",
"3 Similarly, AT = ( ZTZ (cid:62) T ) , and ZT is learned by another two-layer GCN.",
"Equations (1) and (2) are l 2 -norm based loss functions that aim to minimize the distance between original graphs and the reconstructed graphs.",
"Furthermore, to map KG and a KG to a joint embedding space and align their embeddings through common entities, we define the following graph alignment loss function: LJ = || ZK RZT || 2 , (4) where R is a transform matrix that selects common entities that exist in KG and a KG .",
"Note that the two terms ZK and RZT should be of the same size in the L 2 norm equation.",
"Our motivation is to align the embeddings of common entities across two knowledge graphs.",
"By using R , the node embeddings of common entities can be selected from ZT .",
"Note that ZT is always larger than ZK , as KG is a subgraph of a KG .",
"Equation (4) also helps preserve local structures of the original knowledge graph KG in the graph embedding space.",
"In other words, nodes that are close to each other in the original knowledge graph will be neighbors in the augmented graph as well.",
"Moreover, we notice that the proposed augmented knowledge graph a KG involves more complicated structures than the original knowledge graph KG , due to the newly added textual nodes for each target entity in KG .",
"In a KG , one target entity 3 In case of a featureless graph, an identity matrix, I , replaces XK .",
"is closely connected to its textual nodes, and their embeddings should be very close to each other in the graph embedding space.",
"However, such local structures might be distorted in the graph embedding space.",
"Without proper constraints, it is possible that one target entity is close to textual entities of other target entities in the embedding space, which is undesired for downstream applications.",
"To address this issue, we design a margin-based loss function with negative sampling to preserve the locality relationship as follows: LN = log( ( z (cid:62) e z t )) log( ( z (cid:62) e z t (cid:48) )) , (5) where z t are the embeddings of the related textual nodes, z (cid:48) t are the embeddings of textual nodes that are not related to the target entity, and is the sigmoid function.",
"where , , and are hyper-parameters.",
"We perform full-batch gradient descent using the Adam optimizer to learn all the model parameters in an end-to-end fashion.",
"The whole training process of our approach is summarized in Algorithm 1. The learned low-dimensional node embeddings ZK could benefit a number of unsupervised and supervised downstream applications, such as link prediction and node classification.",
"Link prediction is the task of inferring missing links in a graph, and node classification is the task of predicting labels to vertices of a (partially) labeled graph.",
"Extensive evaluations on both tasks will be provided in the experiment section.",
"We have proposed a general framework for graph enrichment and embedding by exploiting auxiliary knowledge sources.",
"What we consider as a source of knowledge is a textual knowledge base that can provide additional information about the entities of the original knowledge graph.",
"It is a secondary source of knowledge that supplies new sets of features outside of the existing feature space, which improves the quality of representations.",
"The proposed graph alignment approach can fully exploit augmented knowledge graph and thus improve the graph embeddings.",
"Although a KG is a supergraph of KG , its connectivity pattern is different.",
"With the help of our customized loss function for graph alignment, both graphs contribute in the quality of derived embeddings.",
"We will also demonstrate the superiority of our joint embedding approach over the independent graph embedding approach (with only a KG ) in the experiments, and we investigate which component of our model contributes more in the final performance in the ablation study in Subsection 4.4.",
"We design our experiments to investigate effectiveness of different components of EDGE as well as its overall performance.",
"To this end, we aim to answer the following three questions 4 .",
"Q 1 How well does EDGE perform compared to state-of-the-art in the task of link prediction?",
"(Section 4.1)",
"Q 2 How is the quality of embeddings generated by EDGE compared to similar methods?",
"(Sec-tions 4.2 and 4.3)",
"Q 3 What is the contribution of each component (augmentation and alignment) in the overall performance?",
"(Section 4.4) 4 We plan to release our code upon publication.",
"To investigate Q1 we perform link prediction on four benchmark datasets, and compare the performance of our model with five relevant baselines.",
"For this task we consider SNOMED and three citation networks.",
"For SNOMED, similar to (Kart-saklis et al., 2018), we select 21K medical concepts from the original dataset.",
"Each entity in SNOMED is a text description of a medical concept, e.g. , Nonvenomous insect bite of hip without infection .",
"According to the procedure explained in subsection 3.2, we construct an augmented knowledge graph, a KG .",
"Additionally, we consider three other datasets, namely Cora, Citeseer, and PubMed, which are citation networks consisting of 2,708, 3,312, and 19,717 papers, respectively.",
"In all three datasets, a short text accompanies each node which is extracted from the title or abstract of the paper.",
"For these networks, relation is defined as citation and the textual content of the nodes enables us to obtain a KG .",
"Cora and Citeseer datasets come with a set of default features.",
"We defer the detailed description of datasets in the supplementary.",
"In this experiment, for each dataset, we train the model on 85% of the input graph.",
"Other 15% of the data is split into 5% validation set and 10% as part of the test set (positive samples only).",
"An additional set of edges are produced, equal to the number of positive samples, which does not exist in the graph, as negative samples.",
"The union of positive and negative samples are used as the test set.",
"In all baselines, we test the model on KG .",
"We obtain the following values for loss ratios after hyper-parameter tuning: = 0 .",
"001 , = 10 , = 1 .",
"We discuss parameter tuning and explain the small value of in Section 4.5.",
"We provide comparison against VGAE (Kipf and Welling, 2016) and its adversarial variant ARVGE (Pan et al., 2018).",
"Also we consider LoNGAE (Tran, 2018), SCAT (Zou and Lerman, 2019) and Table 3: Node classification results in terms of accuracy for citation networks.",
"GIC (Mavromatis and Karypis, 2020) which are designed for link prediction task on graphs, hence they make strong baselines.",
"Table 2 presents the Area Under the ROC Curve (AUC) and average precision (AP) scores for five baselines and our methods across all datasets.",
"We observe that EDGE outperforms all baselines in three out of four datasets and produces comparable results for PubMed dataset.",
"To evaluate the quality of embeddings (Q2) we design a node classification task based on the final product of our model.",
"For this task, we use Cora, Citeseer and PubMed datasets, and follow the same procedure explained in 3.2 to generate a KG and jointly map the two graphs into an embedding space.",
"All the settings are identical to Task 1. To perform node classification, we use the final product of our model, which is a 160 dimensional vector per node.",
"We train a linear SVM classi-fier and obtain the accuracy measure to compare the performance of our model with state-of-the-art methods.",
"Training ratio varies across different datasets, and we consider several baselines to compare our results against.",
"We compare our approach with state-of-the-art semi-supervised models for node classification, including GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), LoNGAE (Tran, 2018), and MixHop (Abu-El-Haija et al., 2019).",
"These models are semi-supervised, thus they were exposed to node labels during training while our approach is completely unsupervised.",
"We also include DeepWalk, an unsupervised approach, to have a more complete view for our comparison.",
"Table 3 reveals that our model achieves reasonable performance compared with semi-supervised models in two out of three datasets.",
"Since EDGE",
"is fully unsupervised, it is fair to declare that its performance is comparable as other methods are exposed to more information ( i.e. , node labels).",
"Further, to measure the quality of embeddings produced by our model and compare it against the baseline, we visualize the similarity matrix of node embeddings for two scenarios on the Cora dataset: 1) GAE on KG , and 2) EDGE on KG and a KG .",
"The results are illustrated in Figure 3. In this heatmap, elements are pair-wise similarity values sorted by different labels (7 classes).",
"We can observe that the block-diagonal structure learned by our approach is clearer than that of GAE, indicating enhanced separability between different classes.",
"Next, we examine our model in more details and study how different parameters affect its performance.",
"To investigate the effectiveness of different modules of our model (Q3), we consider two scenarios.",
"First we use a single graph to train our model.",
"Note that when we use a single graph, the graph alignment and locality preserving losses are discarded and our model is reduced to GAE.",
"In single graph scenario we consider two versions of augmented graph, a KG that was explained in subsection 3.2",
"In the second scenario, we use two graphs to jointly train EDGE , and we feed our model with KG + a KG and KG + a KG to show the effect of augmentation.",
"For link prediction we only consider SNOMED dataset which is the largest dataset, and as Table 4 presents we observe that our augmentation process is slightly more effective than co-occurrence based augmentation.",
"More importantly, by comparing second two rows with first two rows we realize that alignment module improves the performance more than augmentation process which highlights the importance of our proposed joint learning method.",
"Moreover, we repeat this exercise for node classification (see Table 5) which results in a similar trend across all datasets.",
"Finally, we plot the t-SNE visualization of embedding vectors of our model with and without features.",
"Figure 4 clearly illustrates the distinction between quality of the clusters for the two approaches.",
"This implies that knowledge graph text carries useful information.",
"When the text is incorporated into the model, it can help improve the model performance.",
"We evaluate the parameterization of EDGE , and specifically we examine how changes to hyper parameters of our loss function ( i.e. , , and ) could affect the model performance in the task of link prediction on Cora dataset.",
"In each analysis, we fix the values of two out of three parameters and study Table 5: Node classification results in terms of accuracy for citation networks.",
"The detailed results are shown in Figure 5. Figure 5a shows the effect of varying , when = 1 and = 1 are fixed.",
"We observe a somewhat consistent trend across performance for different values of .",
"It is evident that decreasing improves the performance.",
"is the coefficient of LT (see Equation 2).",
"This examination suggests that the effect of this loss function is less significant, because we re-address it in the LN part of the loss function, where we consider the same graph ( a KG ) and try to optimize distance between its nodes but with more constraints.",
"Figure 5b illustrates the effect of varying , while = 1 and = 1 are fixed.",
"Tuning results in more radical changes in the model performance, which is again consistent between the two datasets.",
"Small values for degrades performance remarkably, and we observe a much more improved AUC score for larger values of .",
"This implies the dominant effect of the joint loss function, LJ , which is defined as the distance between corresponding entities of KG and a KG .",
"Next, we fix = 1 and = 1 and tweak from 0 .",
"1 to 10 .",
"As Figure 5c reveals, the variation in performance is very small.",
"Finally, as we obtained the best results when = 10 , we set = 1 and once again tune .",
"Figure 5d shows the results for this updated setting.",
"These experiments confirm the insignificance of parameter .",
"In practice, we obtained the best results by setting to 0.001.",
"Sparsity is a major challenge in KG embedding, and many studies failed to properly address this issue.",
"We proposed EDGE , a novel framework to enrich KG and align the enriched version with the original one with the help of auxiliary text.",
"Using external source of information introduces new sets of features that enhance the quality of embeddings.",
"We applied our model on three citation networks and one large scale medical knowledge graph.",
"Experimental results show that our approach outperforms existing graph embedding methods on link prediction and node classification.",
"This research is supported in part by the U.S. Army Research Office Award under Grant Number W911NF-21-1-0109, and a gift funding from Adobe Research."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"method",
"result",
"other"
] |
[
"This paper studies the task of comparative preference classification (CPC).",
"Given two entities in a sentence, our goal is to classify whether the first (or the second) entity is preferred over the other or no comparison is expressed at all between the two entities.",
"Existing works either do not learn entity-aware representations well and fail to deal with sentences involving multiple entity pairs or use sequential modeling approaches that are unable to capture long-range dependencies between the entities.",
"Some also use traditional machine learning approaches that do not generalize well.",
"This paper proposes a novel Entity-aware Dependency-based Deep Graph Attention Network (ED-GAT) that employs a multihop graph attention over a dependency graph sentence representation to leverage both the semantic information from word embeddings and the syntactic information from the dependency graph to solve the problem.",
"Empirical evaluation shows that the proposed model achieves the state-of-the-art performance in comparative preference classification.",
"Given a sentence that contains two entities of interest, the task of Comparative Preference Classification is to decide whether there is a comparison between the two entities and if so, which entity is preferred (Jindal and Liu, 2006a; Ganapathibhotla and Liu, 2008; Liu, 2012; Panchenko et al., 2019).",
"For example, considering sentence s 1 (shown in Table 1), there is a comparison between the two underlined entities, and golf is preferred over baseball.",
"This sentence contains explicit comparative predicate easier .",
"The task seems straightforward but is quite challenging due to many counterexamples.",
"For example, s 2 shows that better may not indicate a comparison.",
"s 3 , another counterexample, shows that slower indeed indicates a ID Sentences s 1 Golf is easier to pick up than baseball.",
"Problem statement.",
"Given a sentence s = (cid:104) w 1 , w 2 , ..., e 1 , ..., e 2 , ...w n (cid:105) , where e 1 and e 2 are entities consisting of a single word or a phrase, and e 1 appears before e 2 in the sentence, our goal is to classify the comparative preference direction between these two entities into one of the three classes: { BETTER, WORSE, NONE } .",
"BETTER (WORSE) means e 1 is preferred (not preferred) over e 2 .",
"NONE means that there is no comparative relation between e 1 and e 2 .",
"Although closely related, Comparative Preference Classification (CPC) is different from Comparative Sentence Identification (CSI), which is a 2-class classification problem that classifies a sentence as a comparative or a non-comparative sentence.",
"In previous work, Jindal and Liu (2006a) did CSI without considering which two entities are involved in a comparison.",
"Tkachenko and Lauw (2015) employed some dependency graph features to approach the CSI task given two entities of interest.",
"In this entity-aware case, syntactic features are crucial.",
"However, not using word embeddings in the model makes the model harder to generalize with a good performance given various ways of expressing comparisons.",
"Panchenko et al. (2019) gave the state-of-the-art result on the CPC task by using a pretrained sentence encoder to produce sentence embeddings as a feature for classification.",
"However, this model is not entity-aware and does not use the dependency graph information.",
"We explain the reason as follows.",
"For example, the dependency graph information gives a clue that the underlined entities in s 2 of Table 1 are not involved in a comparison, although there is a comparative indicator better in the sentence.",
"s 3 (also refer to Figure 1) has two entity pairs, which make an entity-aware model necessary.",
"The pair of entities, tools and K9 , are far away from each other in the sequence.",
"But in the dependency graph, they are just two hops away from each other and one hop away from the key comparative predicate slower .",
"For the pair of entities, Perl and Python , although both are sequentially near to the word slower , the dependency graph information does not indicate they are involved in a comparison.",
"We see that an entity-aware model can avoid the mistake of taking comparative predicates not associated with the entity pair as an evidence.",
"Also, the dependency graph of a sentence contains important clues that can benefit the comparative preference classification.",
"Methods, which are not entity-aware and do not model dependency structures, are not capable of dealing with the cases in s 2 and s 3 .",
"To address the limitations of the previous models, we propose a novel Entity-aware Dependency-based Deep Graph Attention Network (ED-GAT) for comparative preference classification.",
"We represent a sentence by its dependency graph.",
"This Graph Attention Network (GAT) (Velickovic et al., 2018) based model can naturally fuse word semantic information and dependency information within the model.",
"By building a deep graph attention network stacking several self-attention layers, the model can effectively capture long-range dependencies, which is beneficial for identifying the comparison preference direction between two entities.",
"We have applied this model on a real-world benchmark dataset, and the results show that incorporating the dependency graph information greatly helps this task.",
"It outperforms strong and latest baselines, as discussed in the experiments.",
"In this section, we first give a brief introduction to the GAT model.",
"We then present the proposed ED-GAT model and discuss how to apply it to the CPC task.",
"The critical component of our model is the Graph Attention Network (GAT) (Velickovic et al., 2018), which fuses the graph-structured information and node features within the model.",
"Its masked self-attention layers allow a node to attend to neighborhood features and learn different attention weights for different neighboring nodes.",
"The node features fed into a GAT layer are X = [ x 1 , x 2 , ... x i , ... x n ] , x i RF , where n is the number of nodes, F is the feature size of each node.",
"The attention mechanism of a typical GAT can be summarized by equation (1).",
"Here, given the node feature vectors in GAT, node i attends over its 1-hop neighbors j N i .",
"(cid:107)",
"K k =1 denotes the concatenation of K multi-head attention outputs, h outi RF (cid:48) is the output of node i at the current layer, kij is the k -th attention between nodes i and j , W k RF (cid:48) K F is linear transformation, a k R 2 F (cid:48) K is the weight vector, and f ( ) is LeakyReLU non-linearity function.",
"Overall, the input-output for a single GAT layer is summarized as H out = GAT ( X , A ; l ) .",
"The input is X R n F and the output is H out R n F (cid:48) , where n is the number of nodes, F is the node feature size, F (cid:48) is GAT hidden size, and A R n n is the adjacency matrix of the graph.",
"We use the dependency parser in (Chen and Manning, 2014) to convert a sentence into a dependency parse graph.",
"Each word corresponds to a node in the graph.",
"The node features are the word embedding vectors, denoted as x i RF corresponding to node i .",
"The input node feature matrix is X R n F .",
"Note that an entity is either a single word or a multi-word phrase.",
"To treat each entity as GAT layer 1 GAT layer L ...",
"one node, we replace the whole entity word/phrase with EntityA or EntityB before parsing.",
"A multi-word entity embedding is obtained by averaging the embeddings of the words in the entity.",
"We observe that for a given node in the dependency parse graph, both its parents and children contain useful information for the task.",
"To make the ED-GAT model treat both its parents and children as neighbors, we simplify the original directed dependency graph into an undirected graph.",
"The structure of the graph is encoded into an adjacency matrix A R n n .",
"ED-GAT does not attend to all neighbors of a given node on an equal basis.",
"The attention weights to the neighbors are automatically learned during training based on their usefulness to the task, regardless of whether they are parents or children in the dependency graph.",
"The higher the attention weight given to a neighbor, the more useful this neighbor is to the task.",
"In a single GAT layer, a word or an entity in a graph only attends over the local information from 1-hop neighbors.",
"To enable the model to capture long-range dependencies, we stack L layers to make a deep model, which allows information from L -hops away to propagate to this word.",
"Our model is thus a deep graph attention network.",
"As illustrated in Figure 2, the stacking architecture is represented as H l +1 = GAT ( H l , A ; l ) , l 0 , H 0 = XW 0 + b 0 .",
"The output of the GAT layer l , H lout = GAT ( H l , A ; l ) , is the input for layer ( l + 1) , denoted by H l +1 .",
"H 0 is the initial input.",
"W 0 RF F (cid:48) and b 0 are the projection matrix and bias vector.",
"For a L layer ED-GAT model, the output of the final layer is HL out R n F (cid:48) .",
"We use a mask layer to fetch the two hidden vectors from H Lout , which corresponds to the two entities of interest: ( h e 1 , h e 2 ) = Masklayer( H Lout ) .",
"Next, we concatenate these two vectors as: v = [ h e 1 (cid:107) h e 2 ] and use a feed-forward layer with softmax function to project v into classes for prediction.",
"Here using h e 1 and h e 2 makes the ED-GAT model entity-aware as they are the output of the nodes corresponding to entities e 1 and e 2 , each of which attends over its neighbors' features in L hops in the graph and leverages both the word semantics and dependency structure information in learning.",
"The ED-GAT model is trained by minimizing the standard cross-entropy loss over training examples.",
"Many papers have been devoted to exploring comparisons in text.",
"For the CSI task, early works include those in (Jindal and Liu, 2006a; Ganapathibhotla and Liu, 2008).",
"More recently, Park and Blake (2012) employed handcrafted syntactic rules to identify comparative sentences in scientific articles.",
"For other languages such as Korean and Chinese, related works include (Huang et al., 2008), (Yang and Ko, 2009) and (Zhang and Jin, 2012).",
"Other works are interested in identifying entities, aspects and comparative predicates in comparative sentences, e.g., (Jindal and Liu, 2006b), (Hou and Li, 2008), (Kessler and Kuhn, 2014), (Kessler and Kuhn, 2013), and (Feldman et al., 2007).",
"Ganapathibhotla and Liu (2008) used lexicon properties to determine the preferred entities given the output of (Jindal and Liu, 2006b), which is quite different from our task.",
"There are also works related to product ranking using comparisons, such as those in (Kurashima et al., 2008), (Zhang et al., 2013), (Tkachenko and Lauw, 2014) and (Li et al., 2011).",
"All these related works solve very different problems in comparison analysis than our CPC task.",
"Works in NLP that use Graph Neural Networks and dependency graph structures include (Huang and Carley, 2019), (Guo et al., 2019).",
"But their tasks and models are different from ours.",
"We perform experiments using the benchmark CompSent-19 dataset (Panchenko et al., 2019), where each sentence has an entity pair ( e 1 , e 2 ) and its comparative preference label.",
"The original dataset is split into an 80% training set and a 20% test set.",
"During the experiment, we further Dataset Better Worse None Total Train 872(19%) 379(8%) 3,355(73%) 4,606 Dev 219(19%) 95(8%) 839(73%) 1,153 Test 273(19%) 119(8%) 1,048(73%) 1,440 Total 1,346 593 5,242 7,199 Table 2: Statistics of the CompSent-19 dataset split the original training data by randomly sampling 20% for each label as the development set for model selection.",
"The dataset statistics are given in Table 2.",
"The model is trained only on the newly split training set.",
"We use the class-based F1 score as the evaluation measure.",
"F1(B), F1(W) and F1(N) represent F1 score for classes BETTER, WORSE and NONE respectively.",
"F1-Micro is the average F1 score as in (Panchenko et al., 2019).",
"The Stanford Neural Network Dependency Parser (Chen and Manning, 2014) is used to build the dependency parse graph for each sentence.",
"In our experiment, we use two pretrained word embeddings: GloVe embeddings (Pennington et al., 2014) 1 and BERT embedding (Devlin et al., 2019) 2 .",
"The input of BERT is formatted as the standard BERT input format, with [CLS] before and [SEP] after the sentence tokens.",
"For this, we employ the BERT tokenizer to tokenize each word into word pieces (tokens).",
"The output of the pretrained-BERT model is a sequence of embeddings, each of size 768, and corresponds to a word piece.",
"We average the word piece embeddings of the original word to get the embedding for each word (node in the dependency graph).",
"Note that, word embeddings are kept frozen and not fine-tuned by the subsequent model structure.",
"For the ED-GAT model, we set the hidden size as 300.",
"The features of the nodes, which are the word embeddings, are first transformed into vectors of the hidden size and then fed into the ED-GAT model.",
"We use 6 attention heads, training batch size of 32, Adam optimizer (Kingma and Ba, 2014) with learning rate 5e-4, word embedding dropout rate (Srivastava et al., 2014) 0.3 and GAT attention dropout rate 0.",
"The implementation of the model is based on PyTorch Geometric (PyG) (Fey and Lenssen, 2019) and NVIDIA GPU GTX 1080 ti.",
"1 http://nlp.stanford.edu/data/glove.840B.300d.zip 2 For all our BERT related experiments, we use the pretrained BERT model: https://storage.googleapis.",
"com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip 4.3 Compared Models We compare models from the previous literature with several variations of our proposed model.",
"Majority-Class assigns the majority label in the training set to each instance in the test set.",
"SentEmbed given in (Panchenko et al., 2019) obtains sentence embeddings from a pretrained Sentence Encoder (Conneau et al., 2017; Bowman et al., 2015).",
"The sentence embedding 3 is then fed to XGBoost (Chen and Guestrin, 2016) for classification.",
"For a fair comparison, we also feed the sentence embedding into a linear layer.",
"They are represented as SentEmbed XGBoost and SentEmbed Linear .",
"SVM-Tree 4 given in (Tkachenko and Lauw, 2015) uses convolution kernel methods and dependency tree features to approach the CSI task.",
"We use the one-vs-rest technique to adapt this model to our three-class CPC task.",
"WordEmbed-Avg first constructs a sentence embedding by averaging the word embeddings of all words in a sentence, and then feeds it to a linear classifier.",
"Glove-Avg and BERT-Avg , respectively are the methods that use GloVe embeddings from GloVe.840B (Pennington et al., 2014) and static BERT embeddings (Devlin et al., 2019).",
"BERT-FT appends a linear classification layer on the hidden state corresponding to the first token [CLS] of the BERT sequence output and then fine-tunes the pretrained BERT weights on our task.",
"ED-GAT is the proposed model in this paper (Section 2.2).",
"We use both GloVe embeddings and BERT embeddings.",
"We use (L) to represent model variants with different numbers of layers and use the subscript to denote the type of embedding.",
"For example, ED-GAT GloVe (8) is the ED-GAT model using GloVe embedding, and the depth of the model is 8 layers.",
"We also add the LSTMBERT baseline, which uses the sequence output of a static BERT model to train an LSTM model.",
"The final hidden vector is used for classification.",
"As we see in Table 3, the state-of-the-art (SOTA) baseline is SentEmbed XGBoost .",
"SentEmbed Linear performs much worse than SentEmbed XGBoost .",
"This result shows that XGBoost classifies sentence embeddings much better than a linear layer.",
"Simply using word embedding average, GloVe-Avg 3 https://github.com/facebookresearch/InferSent 4 https://github.com/sitfoxfly/tree-svm Models Micro.",
"and BERT-Avg do not perform well.",
"The result of LSTMBERT shows that using BERT embedding sequentially is not suitable for our task.",
"BERT-FT fine-tunes BERT on our task, but its performance is below SOTA.",
"During experiments, we also found that the performance of BERT-FT is unstable.",
"The training process of the model quickly overfits the pretrained BERT weights.",
"For the ED-GAT model, we first tried to train embeddings only on this dataset by randomly initializing word embeddings as input.",
"As expected, the results were significantly poorer than those using the pre-trained embeddings, in part because our training data is very small (see Table 2).",
"As the baselines all use pretrained embeddings, we thus report the results of using pre-trained word embeddings in Table 3.",
"When employing Glove embeddings, surprisingly, ED-GAT GloVe (10) performs better than BERT-FT, which is based on a language model pretrained on a huge corpus.",
"We also tried to employ word2vec 5 for ED-GAT.",
"It got very similar results to those using the GloVe embeddings.",
"The Micro-F1 scores of using word2vec embeddings for the number of layers 8, 9, and 10 are 83.12, 83.33, and 84.86, respectively.",
"To be concise, we did not include these results in Table 3.",
"Our model also uses the static BERT embedding, which further improves the result.",
"Using static BERT embedding avoids overfitting.",
"On the one hand, it incorporates the rich semantic information with the BERT pretrained weights.",
"On the other hand, ED-GAT's ability to leverage dependency graph features greatly helps the model in capturing 5 GoogleNews-vectors-negative300.bin.gz ( https:// code.google.com/archive/p/word2vec/ ) Figure 3: Effects of the number of layers in ED-GAT the comparison between the entities and classifying the preference direction.",
"Our ED-GATBERT (8) reports the new state-of-the-art results for CPC task considering F1-Micro and all class-wise F1.",
"Effects of Model Depth.",
"From Figure 3, we see that increasing the number of stacked layers improves the performance of the model.",
"For ED-GAT GloVe , as GloVe does not contain the context information, the GAT structure based on the dependency graph greatly improves the result.",
"Even the 2-layer model achieves a good result.",
"ED-GATBERT does not have the same effect because the BERT embedding already contains rich semantic information.",
"But still, when the number of layers increases, ED-GATBERT becomes more powerful as it captures longer range dependencies.",
"This paper proposes a novel model called ED-GAT for Comparative Preference Classification.",
"It naturally leverages dependency graph features and word embeddings to capture the comparison and to classify the preference direction between two given entities.",
"Experimental results show that it outperforms all strong baselines and even BERT pretrained using a huge corpus.",
"Our future work aims to improve the CPC performance further.",
"Apart from that, we also plan to design novel models to perform the related tasks of entity extraction and aspect extraction from comparative sentences.",
"Performing all these tasks jointly in a multitask learning framework is a promising direction as well because it can exploit the shared features and the inherent relationships of these tasks to perform all tasks better.",
"This work was supported in part by two grants from National Science Foundation: IIS-1910424 and IIS-1838770, and a research gift from Northrop Grumman."
] | [
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other"
] |
[
"Abstract Token-level adaptive training approaches can alleviate the token imbalance problem and thus improve neural machine translation, through re-weighting the losses of different target tokens based on specific statistical metrics ( e.g., token frequency or mutual information).",
"Given that standard translation models make predictions on the condition of previous target contexts, we argue that the above statistical metrics ignore target context information and may assign inappropriate weights to target tokens.",
"While one possible solution is to directly take target contexts into these statistical metrics, the target-context-aware statistical computing is extremely expensive, and the corresponding storage overhead is unrealistic.",
"To solve the above issues, we propose a target-context-aware metric, named conditional bilingual mutual information (CBMI), which makes it feasible to supplement target context information for statistical metrics.",
"Particularly, our CBMI can be formalized as the log quotient of the translation model probability and language model probability by decomposing the conditional joint distribution.",
"Thus CBMI can be efficiently calculated during model training without any pre-specific statistical calculations and large storage overhead.",
"Furthermore, we propose an effective adaptive training approach based on both the tokenand sentence-level CBMI.",
"Experimental results on WMT14 English-German and WMT19 Chinese-English tasks show our approach can significantly outperform the Transformer baseline and other related methods.",
"Neural machine translation (NMT) (Bahdanau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017; Meng and Zhang, 2019; Liu et al., 2021a,b)",
"has made remarkable achievements in recent years.",
"Generally, NMT models are trained to maximize the likelihood of the next target token given ground-truth tokens as inputs (Johansen and Juselius, 1990; Goodfellow et al., 2016).",
"Due to the token imbalance phenomenon in natural language (Zipf, 1949), for an NMT model, the learning difficulties of different target tokens may be various.",
"However, the vanilla NMT model equally weights the training losses of different target tokens, irrespective of their difficulties.",
"Recently, various adaptive training approaches (Gu et al., 2020; Xu et al., 2021) have been proposed to alleviate the above problem for NMT.",
"Generally, these approaches re-weight the losses of different target tokens based on specific statistical metrics.",
"For example, Gu et al. (2020) take the token frequency as an indicator and encourage the NMT model to focus more on low-frequency tokens.",
"Xu et al. (2021) further propose the bilingual mutual information (BMI) to measure the word mapping diversity between bilinguals, and down-weight the tokens with relatively lower BMI values.",
"tions in these adaptive training approaches.",
"Given that the standard translation model autoregressively makes predictions on the condition of previous target contexts, we argue that the statistical metrics used in the above approaches ignore target context information and may assign inaccurate weights for target tokens.",
"Specifically, although existing statistical metrics can reflect complex characteristics of target tokens ( e.g., mapping diversity), they fail to model how these properties vary across different target contexts.",
"Secondly, for the identical target tokens in different positions of a target sentence ( e.g., two traffic ' tokens in the Figure 1), they may be mapped from different source-side tokens, but such target-context-free metrics cannot distinguish the above different mappings.",
"In summary, it is necessary to incorporate target context information into the above statistical metrics.",
"One possible solution is to directly take target context information into account and conduct target-context-aware statistical calculations.",
"But in this way, the calculation cost and storage overhead will become huge and unrealistic 1 .",
"Therefore, it is non-trivial to design a suitable target-context-aware statistical metric for adaptive training in the field of NMT.",
"In this paper, we aim to address the above issues in adaptive training methods.",
"Firstly, we propose a novel target-context-aware metric, named C onditional B ilingual M utual I nformation (CBMI), to measure the importance of different target tokens by their dependence on the source sentence.",
"Specifically, we calculate CBMI by the mutual information between a target token and its source sentence on the condition of its target contexts.",
"With the aid of target-context-aware calculations, CBMI can easily model the various characteristics of target tokens under different target contexts, and of course can distinguish identical target tokens with different source mappings.",
"Regarding the computational efficiency, through decomposing the conditional joint distribution in the aforementioned mutual information, our CBMI can be formalized as the log quotient of the translation model probability and language model probability 2 .",
"Therefore, CBMI can be efficiently calculated dur-1 Take the vanilla BMI (Xu et al., 2021) as an example, to process the raw WMT14 En-De training data (about 1.5GB), it takes about 12 CPU hours and 2GB disk storage to save the BMI values.",
"To make matters worse, the cost will increase dozens of times in target-context-aware statistical calculations.",
"2 The detailed derivation process is shown in Equation (7).",
"Please note that the language model is only used during training and thus does not affect the inference speed.",
"ing model training without any pre-specific statistical calculations and huge storage overhead, which makes it feasible to supplement target context information for statistical metrics.",
"Subsequently, we design an adaptive training approach based on both the tokenand sentence-level CBMI, which dynamically re-weights the training losses of the corresponding target tokens.",
"We evaluate our approach on the WMT14 English-German and WMT19 Chinese-English translation tasks.",
"Experimental results on both datasets demonstrate that our approach can significantly outperform the Transformer baseline and other adaptive training methods.",
"Further analyses reveal that CBMI can also reflect the adequacy of translation, and our CBMI-based adaptive training can improve translation adequacy meanwhile maintain fluency.",
"The main contributions of this paper can be summarized as follows: We propose a novel target-context-aware metric, named CBMI, which can reflect the importance of target tokens for NMT models.",
"Theoretical analysis and experimental results show that CBMI is computationally efficient, which makes it feasible to complement target context information in statistical metrics.",
"We further propose an adaptive training approach based on both the tokenand sentence-level CMBI, which dynamically re-weights the training losses of target tokens.",
"Further analyses show that CBMI can also reflect the adequacy of translation, and CBMI-based adaptive training can improve translation adequacy meanwhile maintain fluency 3 .",
"An NMT model is designed to translate a source sentence with M tokens x = { x 1 , x 2 , . . . , x M } into a target sentence with N tokens y = { y 1 , y 2 , . . . , y N } by predicting the probability of each target token:",
"where j is the index of each time step, y <j is the target-side previous context for y j , and is the model parameter.",
"During training, NMT models are generally optimized with the cross-entropy (CE) loss: LCE ( ) = N (cid:88) j =1 log p ( y j | y <j , x ; ) (2)",
"During inference, NMT models predict the probabilities of target tokens in an auto-regressive mode and generate hypotheses using heuristic search algorithms like beam search (Reddy, 1977).",
"Token-level adaptive training aims to alleviate the token imbalance problem for NMT models by re-weighting the training losses of target tokens.",
"How to design a suitable weight adjustment strategy matters, which is we aim to improve in this paper.",
"Formally, for the j -th target token and its adaptive weight w j , the standard cross-entropy loss in Equation (2) is expanded to the following formula: L ada ( ) = N (cid:88) j =1 w j log p ( y j | y <j , x ; ) (3) 2.3 Mutual Information for NMT Mutual information (MI) is a general metric in information theory (Shannon, 1948), which measures the mutual dependence between two random variables a and b as follows 4 : MI( a ; b ) = log (cid:18) p ( a, b ) p ( a ) p ( b ) (cid:19) (4) Xu et al. (2021) propose token-level bilingual mutual information (BMI) to measure the word mapping diversity between bilinguals and further conduct BMI-based adaptive training for NMT.",
"The BMI is formulated as: BMI( x ; y j ) = | x | (cid:88) i =1 log (cid:18) f ( x i , y j ) f ( x i ) f ( y j ) (cid:19) (5) where f ( ) is an word frequency counter.",
"Although BMI can reflect the bilingual mapping properties to some extent, it cannot correspondingly vary with the target context.",
"However, simply introducing target-context-aware calculations into BMI would make the above statistical calculations unrealistic.",
"In this section, we first introduce the definition of CBMI (Section 3.1).",
"Then, we illustrate how to adjust the weights for the training losses of target tokens based on the tokenand the sentence-level CBMI (Section 3.2).",
"Figure 2 shows the overall training process of our approach.",
"As mentioned above, it is necessary to incorporate target context information into the statistical metrics ( e.g., BMI) for adaptive training.",
"However, it is impractical to directly conduct target-context-aware statistical computations due to the expensive computational costs and storage overhead.",
"In this paper, we propose a new target-context-aware metric, named conditional bilingual mutual information (CBMI), to solve the above issues.",
"Specifically, CBMI is calculated by the mutual information between each target token and its source sentence under the condition of previous target context.",
"Formally, the CBMI of a target token y j and its source sentence x is calculated as follow: CBMI( x ; y j ) = MI ( x ; y j | y <j ) = log (cid:18) p ( y j , x | y <j ) p ( y j | y <j ) p ( x | y <j ) (cid:19) (6) The original CBMI definition presented in the above equation still struggles in computation, thus we further simplify it by decomposing the condi-2379 tional joint distribution: CBMI( x ; y j ) = log (cid:18) p ( y j , x | y <j ) p ( y j | y <j ) p ( x | y <j ) (cid:19) = log (cid:18) p ( y j | x , y <j ) p ( x | y <j ) p ( y j | y <j ) p ( x | y <j ) (cid:19) = log (cid:18) p ( y j | x , y <j ) p ( y j | y <j ) (cid:19) = log (cid:18) p NMT ( y j ) p LM ( y j ) (cid:19) (7) where p NMT ( y j ) is the probability output by the NMT model, and p LM ( y j ) is the probability output by an additional target-side language model (LM).",
"In this way, we formalize the complex target-context-aware calculation in Equation (6) as the log quotient of the NMT probability and LM probability.",
"Based on the simplified Equation (7), CBMI can be computed in real time during the model training, thus enabling both target-context-aware and efficient computations.",
"Considering the massive computation required by existing methods to perform the target-context-aware calculation, the LM in our CBMI only brings a modest computational cost in training and finally leads to better performance.",
"We will give a detailed comparison of the calculation cost and storage overhead between our CBMI and existing approaches in Section 5.2.",
"According to the definition, CBMI measures the mutual dependence between a target token and its corresponding source sentence on the condition of its context.",
"Namely, target tokens with larger CBMI value rely more on the source-side information and less on the target historical translations, which is exactly in line with the goal of the adequacy translation model.",
"Given that current NMT models tend to generate fluent but inadequate translations (Weng et al., 2020; Miao et al., 2021), we speculate that making the NMT models pay more attention to target tokens with larger CBMI values can improve translation adequacy and thus improve translation performance.",
"Furthermore, we observe a phenomenon that if target sentences contain many words with small CBMI values, they generally do not match well with the corresponding source sentences.",
"To alleviate the negative effect of these poorly matched sentence pairs, we average all the token-level CBMI values in a target sentence into a sentence-level CBMI and incorporate it into our approach.",
"Consequently, we propose to dynamically adjust the training weight of each target token based on both the tokenand sentence-level CBMI.",
"For clarity, we use t to mark the token-level' intermediate variables and s to mark the sentence-level' ones in the following formulas.",
"Token-Level CBMI.",
"The token-level CBMI can reflect the importance of target tokens for improving translation adequacy ( i.e. , dependency of the source side information).",
"Thus we amplify the weights of target tokens with larger token-level CBMI to make the NMT model pay more attention to them.",
"Particularly, to reduce the variances and stabilize the distribution of the token-level CBMI in each target sentence, we firstly conduct intra-sentence normalization for the token-level CBMI CBMI t ( x ; y j ) : CBMI tnorm ( x ; y j ) = (CBMI t ( x ; y j ) t ) / t (8) where t , t represent the mean values and the standard deviations of CBMI t ( x ; y j ) in each target sentence.",
"Then we scale the normalized CBMI value CBMI tnorm ( x ; y j ) to obtain the token-level training weight for y j : w tj = max { 0 , scale t CBMI tnorm ( x ; y j )+1 } (9) where scale t is a hyperparameter that controls the effect of CBMI tnorm ( x ; y j ) .",
"Sentence-level CBMI.",
"We average all the token-level CBMI values in a target sentence to form the sentence-level CBMI, which can further reflect the matching degree between the bilingual sentences in a sentence pair.",
"To alleviate the negative effect of poorly matched sentence pairs and encourage the NMT model focus on well-matched sentences pairs, we up-weight the sentence pairs with larger sentence-level CBMI values and downweight those sentence pairs with smaller sentence-level CBMI values.",
"Specifically, the sentence-level CBMI between the source sentence x and the target sentence y can be derived from Equation (4) and represented as the arithmetic average of token-level CBMI values 5 : 5 We divide the original sentence CBMI with its corresponding sentence length to reduce its variance.",
"CBMI s ( x ; y ) = 1 | y | log (cid:18) p ( x , y ) p ( x ) p ( y ) (cid:19) = 1 | y | log (cid:18) p ( y | x ) p ( y ) (cid:19) = 1 | y | log (cid:32)(cid:81) j p ( y j | x , y <j ) (cid:81) j p ( y j | y <j ) (cid:33) = 1 | y | (cid:88) j log (cid:18) p ( y j | x , y <j ) p ( y j | y <j ) (cid:19) = 1 | y | (cid:88) j CBMI t ( x ; y j ) (10)",
"Similarly, we conduct inter-sentence normalization for CBMI s ( x ; y ) :",
"where s , s represent the mean values and the standard deviations of CBMI s ( x ; y ) in each mini-batch during training.",
"Subsequently, we also scale CBMI snorm ( x ; y ) in Equation (11) with another hyperparameter scale s to obtain the sentence-level training weight: w s = max { 0 , scale s CBMI snorm ( x ; y ) + 1 } (12) Final Loss Weight.",
"In our adaptive training approach, for the target token y j , its final loss weight w j in Equation (3) is the multiplication of the above two weights in Equation (9) and (12): w j = w tj w s (13) 4 Experiments 4.1 Datasets We conduct experiments on two large-scale WMT tasks, i.e., the WMT14 English to German (En-De) and WMT19 Chinese to English (Zh-En).",
"For the En-De task, the training set contains 4.5M sentence pairs.",
"The validation set and test set are new-stest2013 and newstest2014, respectively.",
"For the Zh-En task, the training set totally contains 20M sentence pairs and the validation set and test set are newstest2018 and newstest2019, respectively.",
"Following previous work, we share the vocabulary for the En-De task and segment words into subwords using byte pair encoding (BPE) (Sennrich et al., 2016) with 32k merge operations for both datasets.",
"Training.",
"We implement baselines and our approach under Transformer base and Transformer big settings based on the open-source toolkit fairseq (Ott et al., 2019) with mixed precision (Ott et al., 2018).",
"We train all the translation models with the cross-entropy loss for 100k steps, and further finetune them with different adaptive training objectives for another 200k steps on both tasks.",
"The target-side language model is a Transformer decoder without the cross-attention modules, which is trained synchronously with the translation model.",
"The training data for the language model is the target-side monolingual data from the NMT training set.",
"All the experiments are conducted on 8 NVIDIA Tesla V100 GPUs, and each batch on each GPU contains approximately 4096 tokens.",
"We use Adam optimizer (Kingma and Ba, 2014) with 4000 warmup steps to optimize models.",
"More training details are listed in Appendix B. In our experiments, we have not been able to bring further improvement to our approach through simply enhancing the language model.",
"Our conjecture is that stronger language models will generate sharper distribution, and will increase the variances of CBMI values when used as the denominator, resulting in detriment for NMT model training.",
"We will leave this for the future work.",
"Evaluation.",
"During inference, we set beam size to 4 and length penalty to 0.6 for both tasks.",
"We use multibleu.perl to calculate case-sensitive BLEU for WMT14 En-De and SacreBLEU 6 to calculate case-sensitive BLEU for WMT19 Zh-En.",
"We use the paired bootstrap resampling methods (Koehn, 2004) for the statistical significance test.",
"In this section, we introduce the hyperparameter settings of our approach according to the performance on the validation set of the WMT14 En-De dataset, and we share the same hyperparameter settings with the WMT19 Zh-En dataset.",
"Scale Setting.",
"The two hyperparameter scale t and scale s in Equation (9) and Equation (12) determine the effects of token-level and sentence-level CBMI.",
"To investigate the effects of the two CBMI in different granularities, we firstly fix scale t to a moderate value, i.e. , 0.1, and tune scale s from 0.0 to 0.3 with the step of 0.05.",
"The detailed results are shown in Figure 3.",
"We observe that models perform better with larger scale s , which conforms with our conjecture in Section 3.2 that well-matched sentence pairs contribute more to NMT models.",
"Then 6 SacreBLEU hash: BLEU+case.mixed+numrefs.1 +smooth.exp+tok.13a+version.1.5.1.",
"we fix scale s to 0.3 and tune scale t in a similar way.",
"We find it better to keep scale t in a small range and too large value is harmful for models.",
"We conjecture that over-focus on the high-CBMI tokens brings another imbalance for training and may hurt the models.",
"Thus we set scale t to 0.1 in our following experiments.",
"We implement our approach based on the Transformer (Vaswani et al., 2017) and compare it with some mainstream adaptive training methods (de-tailed hyperparameter settings are provided in Appendix C).",
"Transformer.",
"We follow the standard base/big model configurations (Vaswani et al., 2017) to implement our baseline systems.",
"Freq-Exponential.",
"Gu et al. (2020) use monolingual token frequency to design an exponential weight function for token-level adaptive training: w j = A e T Count ( y j ) + 1 where A and T are two hyperparameters to adjusting the distribution of weights.",
"Freq-Chi-Square.",
"Gu et al. (2020) use the chi-square distribution to filter out extremely low frequency target tokens: w j = A Count ( y j ) 2 e T Count ( y j ) + 1 where A and T play the same roles as above.",
"BMI-adaptive.",
"Xu et al. (2021) calculate BMI (in Equation (5)) during the data pre-processing stage and scale it for adaptive loss weights.",
"Focal Loss.",
"Lin et al. (2017) propose the focal loss for objective detection tasks to solve the class imbalance problem.",
"Here we introduce it into NMT.",
"where and are hyperparameters to adjust the loss weight and p is the NMT predicted probability.",
"Anti-Focal Loss.",
"Raunak et al. (2020) design an anti-focal loss function to solve the long-tailed problem in NMT by incorporating the inductive bias of inference into training.",
"Self-Paced Learning.",
"Wan et al. (2020) calculate model confidence via Monte Carlo dropout sampling (Gal and Ghahramani, 2016) to measure the token difficulty and use it to re-weight the training losses of tokens.",
"Simple Fusion.",
"Stahlberg et al. (2018) propose two simple strategies (i.e., PRENORM and POSTNORM ) to fuse the NMT probabilities with the LM probablities and directly optimize the fusion during the NMT training process 7 .",
"LM Prior.",
"Baziotis et al. (2020) propose to distill the prior knowledge from LMs trained on rich-resource monolingual data to low-resource NMT models 8 : L lmp = LNMT + LKL ( p LM || p NMT ; ) (17) where weights the distillation term and is the softmax temperature (Hinton et al., 2015).",
"7 The results in Table 1 are the higher ones between the two strategies.",
"8 We did not use extra monolingual data for the LMs in Simple Fusion' and LM Prior' in our implementation for fair comparison.",
"The overall results on two WMT tasks based on the Transformer base and Transformer big configurations are shown in Table",
"1. Under the Transformer base setting, CBMI-based adaptive training can respectively improve +0.91 and +0.85 BLEU scores on En-De and Zh-En tasks compared to the Transformer baseline.",
"Compared to the most related yet target-context-free strategy BMI-adaptive', our CBMI-based adaptive training strategy can respectively yield significant improvements up to +0.46 and +0.44 BLEU scores on En-De and Zh-En, which demonstrate the significance of the target context for token assessment in token-level adaptive training.",
"Compared with the best performing baseline Self-Paced Learning', our approach still outperforms it by +0.32 and +0.46 BLEU scores on the two tasks.",
"Our conjecture is that CBMI not only reflects the model competence used in Self-Paced Learning' but also further incorporates the linguistic statistical information from the target-side LM, thus reflects more explicit translation property ( i.e., adequacy).",
"However, other LM enhanced methods ( e.g., Simple Fusion' and LM Prior') bring limited improvement or even degradation to the NMT models when there is no extra data for the LMs, which further proves the utilization of the LM in our approach is more effective.",
"Under the Transformer big setting, where the performances of existing methods are limited, our method can still bring the improvement of +0.81 and +0.82 BLEU scores on the En-De and Zh-En, which demonstrates the superiority of CBMI under stronger baselines.",
"In this section, we provide in-depth analyses on the effectiveness of our CBMI and conduct experiments on the validation set of WMT14 En-De with the Transformer base model.",
"We take the Transformer base as baseline, and then apply adaptive training based on the token-level CBMI, the sentence-level CBMI, and both of them, respectively.",
"Results are listed in Table",
"2. We observe certain improvements (+0.29 and +0.44 BLEU scores) when separately applying the token-and sentence-level CBMI based approaches.",
"It suggests that our CBMI can measure the token importance from different granularities, and up-weight 2383 Model BLEU Transformer base 26.24 + token-level CBMI 26.53 (+0.29) + sentence-level CBMI 26.68 (+0.44) + token& sentence-level CBMI 26.78 (+0.54) Table 2: BLEU scores (%) of CBMI at different granularities on the validation set of WMT14 En-De.",
"the important tokens or sentence pairs can improve translation quality.",
"Furthermore, the combination of both the tokenand sentence-level CBMI brings further improvement (+0.55 BLEU scores), which illustrates that the CBMI in different granularities are complementary and have cumulative gains.",
"In this section, we compare our CBMI-based approach with the BMI-based adaptive training in terms of the number of trainable parameters, the CPU computational costs of pre-processing, the GPU computational costs of training, and disk cost for storing intermediate variables.",
"As shown in Table 3, the vanilla BMI-based approach requires additional 12 CPU hours to obtain the BMI values during the pre-processing stage, and about 2.0 GB of disk space to store these BMI values.",
"To make matters worse, the costs of CPU calculation and disk storage will increase dozens of times (approx-imately equal to the average length of target sentences) when conducting the target-context-aware calculations for BMI.",
"In contrast, our CBMI-based approach gets rid of the CPU computational costs, and thus has no additional storage overhead.",
"Although we introduce an additional LM to calculate the CBMI values, it only brings a slight increase of model parameters and GPU calculation cost during model training.",
"Particularly, our proposed method simply modifies the training loss of NMT, and thus has no effect on the inference speed.",
"In short, our CBMI can be efficiently calculated during model training without any pre-specific statistical calculations and storage overhead, which makes it feasible to supplement target context information for statistical metrics.",
"To verify whether our CBMI measurement is indeed highly related to the translation adequacy of NMT models, as we conjectured in Section 3.2, we conduct the human evaluation in terms of adequacy and fluency.",
"We randomly sample 100 sentences Method Pre-process #Params Train Disk (hour) (M) (hour) (GB) Transformer base 0 65 10 0 + BMI 12 65 11 2.0 + target context 12 N 65 2.0 N + CBMI 0 101 12 0 Table 3: The costs of calculation and storage of the BMIand CBMI-based approaches on the WMT14 EnDe (100k training steps).",
"from the test set of WMT19 Zh-En and invite three annotators to evaluate the translation adequacy and fluency.",
"Scores for both indexes are limited in [1,5].",
"For adequacy, 1' represents irrelevant to the source sentence and 5' represents semantically equal.",
"For fluency, 1' means unintelligible and 5' means fluent and native.",
"We finally average the scores from three annotators and list the results in Table",
"4. We observe that our approach significantly promotes the translation adequacy of the Transformer base baseline, and meanwhile slightly promotes the translation fluency.",
"It indicates that the CBMI measurement is highly related to the adequacy of NMT models, and focusing more on the tokens with high CBMI can improve translation adequacy, and thus improve translation performance.",
"Given that CBMI reflects the dependency between a target token and its source sentence on the condition of its target context, in this section, we explore whether CBMI can serve as an indicator for selecting an appropriate prior distribution to improve the NMT model.",
"Prior distributions have been proved for their ability to provide additional knowledge for models (Baziotis et al., 2020; Li et al., 2020).",
"Thus we try three generated distributions as prior distributions for NMT models, i.e. , the translation model distribution (TM prior), the language model distribution (LM prior), and the softmax normalized CBMI distribution (CBMI prior).",
"these distributions according to different tokens and surprisingly observe that the accuracies are highly related to the CBMI values of tokens.",
"As shown in Figure 4, the most accurate prior for target tokens with different CBMI values is not always consistent.",
"Based on this observation, we further design a CBMI-based prior selection strategy to choose the best prior distribution for each token.",
"The details of the selection strategy are seen in Appendix D. As shown in Table 5, all these prior distributions can provide helpful guidance and enhance the baseline model.",
"More importantly, the CBMI-based prior selection strategy can achieve a better performance compared with the single prior, demonstrating that CBMI also serves as an appropriate indicator for the translation prior selection.",
"We will explore the more sophisticated CBMI-based prior selection strategy in the future work.",
"the information in language models is a common solution to improve NMT models.",
"In low-resource scenarios, LMs trained on extra monolingual data are usually more informative and thus used to fuse with NMT prediction (Gulcehre et al., 2015, 2017; Sriram et al., 2017; Stahlberg et al., 2018), provide prior knowledge for NMT models (Baziotis et al., 2020) and enhance representations of NMT (Clinchant et al., 2019; Zhu et al., 2020).",
"In data augmentation methods, LMs are also widely used to generate contextual substitutions of words in sentences (Kobayashi, 2018; Wu et al., 2018; Gao et al., 2019).",
"Differently, all the aforementioned methods rely on the LMs that are trained on extra data, while the LM in our method does not require extra data and also has no influence on the inference speed.",
"In this paper, we propose a target-context-aware metric for target tokens, named conditional bilingual mutual information (CBMI).",
"Compared with previous statistical metrics, our CBMI only increases limited computational costs to incorporate the target context and provides a more suitable assessment for tokens.",
"Furthermore, based on the tokenand sentence-level CBMI, we design a CBMI-based adaptive training strategy to amply the contributions of the important tokens.",
"Experimental results on two WMT tasks demonstrate the effectiveness of our proposed approach.",
"Further analyses show that CBMI can improve translation adequacy and serve as an appropriate indicator for the translation prior selection.",
"The research work descried in this paper has been supported by the National Key R&D Program of China (2020AAA0108001) and the National Nature Science Foundation of China (No. 61976016, 61976015, and 61876198).",
"The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Knowledge graph completion (KGC) aims to reason over known facts and infer the missing links.",
"Text-based methods such as KG-BERT (Yao et al., 2019) learn entity representations from natural language descriptions, and have the potential for inductive KGC.",
"However, the performance of text-based methods still largely lag behind graph embedding-based methods like TransE (Bordes et al., 2013) and RotatE (Sun et al., 2019b).",
"In this paper, we identify that the key issue is efficient contrastive learning.",
"To improve the learning efficiency, we introduce three types of negatives: in-batch negatives, pre-batch negatives, and self-negatives which act as a simple form of hard negatives.",
"Combined with InfoNCE loss, our proposed model SimKGC can substantially outperform embedding-based methods on several benchmark datasets.",
"In terms of mean reciprocal rank (MRR), we advance the state-of-the-art by +19% on WN18RR, +6.8% on the Wikidata5M transductive setting, and +22% on the Wikidata5M inductive setting.",
"Thorough analyses are conducted to gain insights into each component.",
"Our code is available at https://github.com/ intfloat/SimKGC .",
"Large-scale knowledge graphs (KGs) are important components for knowledge-intensive applications, such as question answering (Sun et al., 2019a), recommender systems (Huang et al., 2018), and intelligent conversational agents (Dinan et al., 2019) etc.",
"KGs usually consist of a set of triples ( h , r , t ), where h is the head entity, r is the relation, and t is the tail entity.",
"Popular public KGs include Freebase (Bollacker et al., 2008), Wikidata (Vran-decic and Krtzsch, 2014), YAGO (Suchanek et al., 2007), ConceptNet (Speer et al., 2017), and WordNet (Miller, 1992) etc.",
"Despite their usefulness Work done while at Yuanfudao AI Lab.",
"in practice, they are often incomplete.",
"Knowledge graph completion (KGC) techniques are necessary for the automatic construction and verification of knowledge graphs.",
"Existing KGC methods can be categorized into two families: embedding-based and text-based methods.",
"Embedding-based methods map each entity and relation into a low-dimensional vector, without using any side information such as entity descriptions.",
"This family includes TransE (Bor-des et al., 2013), TransH (Wang et al., 2014), RotatE (Sun et al., 2019b), and TuckER (Balaze-vic et al., 2019) etc.",
"By comparison, text-based methods (Yao et al., 2019; Xie et al., 2016; Wang et al., 2021c) incorporate available texts for entity representation learning, as shown in Figure 1.",
"Intuitively, text-based methods should outperform embedding-based counterparts since they have access to additional input signals.",
"However, results on popular benchmarks (e.g., WN18RR, FB15k-237, Wikidata5M) tell a different story: text-based methods still lag behind even with pre-trained language models.",
"We hypothesize that the key issue for such performance degradation is the inefficiency in contrastive learning.",
"Embedding-based methods do not involve the expensive computation of text en-4281 coders and thus can be extremely efficient to train with a large negative sample size.",
"For example, the default configuration of RotatE 1 trains 1000 epochs with a negative sample size of 64 on the Wikidata5M dataset.",
"While the text-based method KEPLER (Wang et al., 2021c) can only train 30 epochs with a negative sample size of 1 due to the high computational cost incurred by RoBERTa.",
"In this paper, inspired by the recent progress on contrastive learning, we introduce three types of negatives to improve the text-based KGC method: in-batch negatives, pre-batch negatives, and self-negatives.",
"By adopting bi-encoder instead of cross-encoder (Yao et al., 2019) architecture, the number of in-batch negatives can be increased by using a larger batch size.",
"Vectors from previous batches are cached and act as pre-batch negatives (Karpukhin et al., 2020).",
"Additionally, mining hard negatives can be beneficial for improving contrastive learning.",
"We find that the head entity itself can serve as hard negatives, which we call self-negatives.",
"As a result, the negative sample size can be increased to the scale of thousands.",
"We also propose to change the loss function from margin-based ranking loss to InfoNCE, which can make the model focus on hard negatives.",
"One advantage of text-based methods is that they enable inductive entity representation learning.",
"Entities that are not seen during training can still be appropriately modeled, while embedding-based methods like TransE can only reason under the transductive setting 2 .",
"Inductive knowledge graph completion is important in the real world as new entities are coming out every day.",
"Moreover, text-based methods can leverage state-of-the-art pre-trained language models to learn better representations.",
"A line of recent work (Shin et al., 2020; Petroni et al., 2019) attempts to elicit the implicitly stored knowledge from BERT.",
"The task of KGC can also be regarded as a way to retrieve such knowledge.",
"Two entities are more likely to be related if connected by a short path in the graph.",
"Empirically, we find that text-based models heavily rely on the semantic match and ignore such topological bias to some degree.",
"We propose a simple re-ranking strategy by boosting the scores of the head entity's k -hop neighbors.",
"conducting experiments on three popular benchmarks: WN18RR, FB15k-237, and Wikidata5M (both transductive and inductive settings).",
"According to the automatic evaluation metrics (MRR, Hits@{1,3,10}), SimKGC outperforms state-of-the-art methods by a large margin on the WN18RR (MRR 47 . 6 66 . 6 ), Wikidata5M transductive setting (MRR 29 . 0 35 . 8 ), and inductive setting (MRR 49 . 3 71 . 4 ).",
"On the FB15k-237 dataset, our results are also competitive.",
"To help better understand our proposed method, we carry out a series of analyses and report human evaluation results.",
"Hopefully, SimKGC will facilitate the future development of better KGC systems.",
"Knowledge Graph Completion involves modeling multi-relational data to aid automatic construction of large-scale KGs.",
"In translation-based methods such as TransE (Bordes et al., 2013) and TransH (Wang et al., 2014), a triple ( h , r , t ) is a relation-specific translation from the head entity h to tail entity t .",
"Complex number embeddings are introduced by Trouillon et al. (2016) to increase the model's expressiveness.",
"RotatE (Sun et al., 2019b) models a triple as relational rotation in complex space.",
"Nickel et al. (2011); Balazevic et al. (2019) treat KGC as a 3-D binary tensor factorization problem and investigate the effectiveness of several factorization techniques.",
"Some methods attempt to incorporate entity descriptions.",
"DKRL (Xie et al., 2016) uses a CNN to encode texts, while KG-BERT (Yao et al., 2019), StAR (Wang et al., 2021a), and BLP (Daza et al., 2021) both adopt pre-trained language models to compute entity embeddings.",
"GraIL (Teru et al., 2020) and BERTRL (Zha et al., 2021) conduct inductive relation prediction by utilizing subgraph or path information.",
"In terms of benchmark performance (Wang et al., 2021c), text-based methods still underperform methods like RotatE.",
"Pre-trained Language Models including BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and T5 (Raffel et al., 2019) have led to a learning paradigm shift in NLP.",
"Models are first pre-trained on large amounts of unlabeled text corpora with language modeling objectives, and then fine-tuned on downstream tasks.",
"Considering their good performance in few-shot and even zero-shot 4282 scenarios (Brown et al., 2020), one interesting question is: Can pre-trained language models be used as knowledge bases?",
"Petroni et al. (2019) proposed to probe language models with manually designed prompts.",
"A series of following work (Shin et al., 2020; Zhong et al., 2021; Jiang et al., 2020) focus on finding better prompts to elicit the knowledge implicitly stored in the model parameters.",
"Another line of work (Zhang et al., 2019; Liu et al., 2020; Wang et al., 2021c) injects symbolic knowledge into language model pre-training, and shows some performance boost on several knowledge-intensive tasks.",
"Contrastive Learning learns useful representations by contrasting between positives and negatives (Le-Khac et al., 2020).",
"The definitions of positives and negatives are task-specific.",
"In self-supervised vision representation learning (Chen et al., 2020; He et al., 2020; Grill et al., 2020), a positive pair is two augmented views of the same image, while a negative pair is two augmented views of different images.",
"Recently, contrastive learning paradigm has witnessed great successes in many different fields, including multi-modal pre-training (Radford et al., 2021), video-text retrieval (Liu et al., 2021), and natural language understanding (Gunel et al., 2021) etc.",
"In the NLP community, by leveraging the supervision signals from natural language inference data (Gao et al., 2021), QA pairs (Ni et al., 2021), and parallel corpora (Wang et al., 2021b), these methods have surpassed non-contrastive methods (Reimers and Gurevych, 2019) on semantic similarity benchmarks.",
"Karpukhin et al. (2020); Qu et al. (2021); Xiong et al. (2021) adopt contrastive learning to improve dense passage retrieval for open-domain question answering, where the positive passages are the ones containing the correct answer.",
"A knowledge graph G is a directed graph, where the vertices are entities E , and each edge can be represented as a triple ( h , r , t ), where h , r , and t correspond to head entity, relation, and tail entity, respectively.",
"The link prediction task of KGC is to infer the missing triples given an incomplete G .",
"Under the widely adopted entity ranking evaluation protocol, tail entity prediction ( h , r , ? ) requires ranking all entities given h and r , similarly for head entity prediction ( ? , r , t ).",
"In this paper, for each triple ( h , r , t ), we add an inverse triple ( t , r 1 , h ), where r 1 is the inverse relation of r .",
"Based on such reformulation, we only need to deal with the tail entity prediction problem (Malaviya et al., 2020).",
"Our proposed model SimKGC adopts a bi-encoder architecture.",
"Two encoders are initialized with the same pre-trained language model but do not share parameters.",
"Given a triple ( h , r , t ), the first encoder BERT hr is used to compute the relation-aware embedding for the head entity h .",
"We first concatenate the textual descriptions of entity h and relation r with a special symbol [SEP] in between.",
"BERT hr is applied to get the last-layer hidden states.",
"Instead of directly using the hidden state of the first token, we use mean pooling followed by L 2 normalization to get the relation-aware embedding e hr , as mean pooling has been shown to result in better sentence embeddings (Gao et al., 2021; Reimers and Gurevych, 2019).",
"e hr is relation-aware since different relations will have different inputs and thus have different embeddings, even though the head entity is the same.",
"Similarly, the second encoder BERT t is used to compute the L 2 -normalized embedding e t for the tail entity t .",
"The input for BERT t only consists of the textual description for entity t .",
"Since the embeddings e hr and e t are both L 2 normalized, the cosine similarity cos( e hr , e t ) is simply the dot product between two embeddings: cos( e hr , e t ) = e hr e t (cid:107) e hr (cid:107)(cid:107) e t (cid:107) = e hr e t (1) For tail entity prediction ( h , r , ? ), we compute the cosine similarity between e hr and all entities in E , and predict the one with the largest score: argmax t i cos( e hr , e t i ) , t i E (2) 3.3 Negative Sampling For knowledge graph completion, the training data only consists of positive triples.",
"Given a positive triple ( h , r , t ), negative sampling needs to sample one or more negative triples to train discriminative models.",
"Most existing methods randomly corrupt h or t and then filter out false negatives that appear in the training graph G .",
"The 4283 negatives for different triples are not shared and therefore independent.",
"The typical number of negatives are 64 for embedding-based methods (Sun et al., 2019b), and 5 for text-based methods (Wang et al., 2021a).",
"We combine three types of negatives to improve the training efficiency without incurring significant computational and memory overhead.",
"In-batch Negatives (IB) This is a widely adopted strategy in visual representation learning (Chen et al., 2020) and dense passage retrieval (Karpukhin et al., 2020) etc.",
"Entities within the same batch can be used as negatives.",
"Such in-batch negatives allow the efficient reuse of entity embeddings for bi-encoder models.",
"Pre-batch Negatives (PB) The disadvantage of in-batch negatives is that the number of negatives is coupled with batch size.",
"Pre-batch negatives (Lee et al., 2021) use entity embeddings from previous batches.",
"Since these embeddings are computed with an earlier version of model parameters, they are not consistent with in-batch negatives.",
"Usually, only 1 or 2 pre-batches are used.",
"Other methods like MoCo (He et al., 2020) can also provide more negatives.",
"We leave the investigation of MoCo as future work.",
"Self-Negatives (SN) Besides increasing the number of negatives, mining hard negatives (Gao et al., 2021; Xiong et al., 2021) is also important for improving contrastive representation learning.",
"For tail entity prediction ( h , r , ? ), text-based methods tend to assign a high score to the head entity h , likely due to the high text overlap.",
"To mitigate this issue, we propose self-negatives that use the head entity h as hard negatives.",
"Including self-negatives can make the model rely less on the spurious text match.",
"We use NIB , NPB , and NSN to denote the aforementioned three types of negatives.",
"During training, there may exist some false negatives.",
"For example, the correct entity happens to appear in another triple within the same batch.",
"We filter out such entities with a binary mask 3 .",
"Combining them all, the collection of negatives N ( h, r ) is: { t (cid:48) | t (cid:48) NIB NPB NSN , ( h, r, t (cid:48) ) / G} (3) 3 False negatives that do not appear in the training data will not be filtered.",
"Assume the batch size is 1024 , and 2 pre-batches are used, we would have |N IB | = 1024 1 , |N PB | = 2 1024 , |N SN | = 1 , and |N ( h, r ) | = 3072 negatives in total.",
"Knowledge graphs often exhibit spatial locality.",
"Nearby entities are more likely to be related than entities that are far apart.",
"Text-based KGC methods are good at capturing semantic relatedness but may not fully capture such inductive bias.",
"We propose a simple graph-based re-ranking strategy: increase the score of candidate tail entity t i by 0 if t i is in k -hop neighbors E k ( h ) of the head entity h based on the graph from training set: argmax t i cos( e hr , e t i ) + 1 ( t i E k ( h )) (4) 3.5 Training and Inference During training, we use InfoNCE loss with additive margin (Chen et al., 2020; Yang et al., 2019): L = log e ( ( h,r,t ) ) / e ( ( h,r,t ) ) / + (cid:80) |N| i =1 e ( h,r,t (cid:48) i ) / (5) The additive margin > 0 encourages the model to increase the score of the correct triple ( h , r , t ).",
"( h, r, t ) is the score function for a candidate triple, here we define ( h, r, t ) = cos( e hr , e t ) [ 1 , 1] as in Equation 1.",
"The temperature can adjust the relative importance of negatives, smaller makes the loss put more emphasis on hard negatives, but also risks over-fitting label noise.",
"To avoid tuning as a hyperparameter, we re-parameterize log 1 as a learnable parameter.",
"For inference, the most time-consuming part is O ( |E| ) BERT forward pass computation of entity embeddings.",
"Assume there are |T | test triples.",
"For each triple ( h , r , ? ) and ( t , r 1 , ? ), we need to compute the relation-aware head entity embedding and use a dot product to get the ranking score for all entities.",
"In total, SimKGC needs |E| + 2 |T | BERT forward passes, while cross-encoder models like KG-BERT (Yao et al., 2019) needs |E| 2 |T | .",
"Being able to scale to large datasets is important for practical usage.",
"For bi-encoder models, we can precompute the entity embeddings and retrieve top-k entities efficiently with the help of fast similarity search tools like Faiss (Johnson et al., 2021).",
"Datasets We use three datasets for evaluation: WN18RR, FB15k-237, and Wikidata5M (Wang et al., 2021c).",
"The statistics are shown in Table 1.",
"Bordes et al. (2013) proposed the WN18 and FB15k datasets.",
"Later work (Toutanova et al., 2015; Dettmers et al., 2018) showed that these two datasets suffer from test set leakage and released WN18RR and FB15k-237 datasets by removing the inverse relations.",
"The WN18RR dataset consists of 41 k synsets and 11 relations from WordNet (Miller, 1992), and the FB15k-237 dataset consists of 15 k entities and 237 relations from Freebase.",
"The Wikidata5M dataset is much larger in scale with 5 million entities and 20 million triples.",
"It provides two settings: transductive and inductive.",
"For the transductive setting, all entities in the test set also appear in the training set, while for the inductive setting, there is no entity overlap between train and test set.",
"We use Wikidata5M-Trans and Wikidata5M-Ind to indicate these two settings.",
"For textual descriptions, we use the data provided by KG-BERT (Yao et al., 2019) for WN18RR and FB15k-237 datasets.",
"The Wikidata5M dataset already contains descriptions for all entities and relations.",
"Evaluation Metrics Following previous work, our proposed KGC model is evaluated with entity ranking task: for each test triple ( h, r, t ) , tail entity prediction ranks all entities to predict t given h and r , similarly for head entity prediction.",
"We use four automatic evaluation metrics: mean reciprocal rank (MRR), and Hits@ k ( k { 1 , 3 , 10 }) (H@ k for short).",
"MRR is the average reciprocal rank of all test triples.",
"H@ k calculates the proportion of correct entities ranked among the topk .",
"MRR and H@ k are reported under the filtered setting (Bor-des et al., 2013), The filtered setting ignores the scores of all known true triples in the training, validation, and test set.",
"All metrics are computed by averaging over two directions: head entity prediction and tail entity prediction.",
"Hyperparameters The encoders are initialized with bert-base-uncased (English).",
"Using better pre-trained language models is expected to improve performance further.",
"Most hyperparameters except learning rate and training epochs are shared across all datasets to avoid dataset-specific tuning.",
"We conduct grid search on learning rate with ranges { 10 5 , 3 10 5 , 5 10 5 }.",
"Entity descriptions are truncated to a maximum of 50 tokens.",
"Temperature is initialized to 0 .",
"05 , and the additive margin for InfoNCE loss is 0 .",
"02 .",
"For re-ranking, we set = 0 .",
"05 .",
"2 pre-batches are used with logit weight 0 .",
"5 .",
"We use AdamW optimizer with linear learning rate decay.",
"Models are trained with batch size 1024 on 4 V100 GPUs.",
"For the WN18RR, FB15k-237, and Wikidata5M (both settings) datasets, we train for 50 , 10 , and 1 epochs, respectively.",
"Please see Appendix A for more details.",
"We reuse the numbers reported by Wang et al. (2021c) for TransE and DKRL, and the results for RotatE are from the official GraphVite 4 benchmark.",
"In Table 2 and 3, our proposed model SimKGC IB+PB+SN outperforms state-of-the-art methods by a large margin on the WN18RR, Wikidata5M-Trans, and Wikidata5M-Ind datasets, but slightly lags behind on the FB15k-237 dataset (MRR 33 . 6% vs 35 . 8% ).",
"To the best of our knowledge, SimKGC is the first text-based KGC method that achieves better results than embedding-based counterparts.",
"4 https://graphvite.io/docs/latest/benchmark 4285 Method Wikidata5M-Trans Wikidata5M-Ind MRR H@1 H@3 H@10 MRR H@1 H@3 H@10 embedding-based methods TransE (Bordes et al., 2013) 25.3 17.0 31.1 39.2 --RotatE (Sun et al., 2019b) 29.0 23.4 32.2 39.0 --text-based methods DKRL (Xie et al., 2016) 16.0 12.0 18.1 22.9 23.1 5.9 32.0 54.6 KEPLER (Wang et al., 2021c) 21.0 17.3 22.4 27.7 40.2 22.2 51.4 73.0 BLP-ComplEx (Daza et al., 2021) --48.9 26.2 66.4 87.7 BLP-SimplE (Daza et al., 2021) --49.3 28.9 63.9 86.6 SimKGC IB 35.3 30.1 37.4 44.8 60.3 39.5 77.8 92.3 SimKGC IB+PB 35.4 30.2 37.3 44.8 60.2 39.4 77.7 92.4 SimKGC IB+SN 35.6 31.0 37.3 43.9 71.3 60.7 78.7 91.3 SimKGC IB+PB+SN 35.8 31.3 37.6 44.1 71.4 60.9 78.5 91.7 Table 2: Main results for the Wikidata5M dataset.",
"We report results for various combinations of negatives.",
"With in-batch negatives only, the performance of SimKGC IB is already quite strong thanks to the large batch size (1024) we use.",
"Adding self-negatives tends to improve H@1 but hurt H@10.",
"We hypothesize that self-negatives make the model rely less on simple text match.",
"Thus they have negative impacts on metrics that emphasize recall, such as H@10.",
"Combining all three types of negatives generally has the best results but not always.",
"Compared to other datasets, the graph for the FB15k-237 dataset is much denser (average degree is 37 per entity), and contains fewer entities ( 15 k ).",
"To perform well, models need to learn generalizable inference rules instead of just modeling textual relatedness.",
"Embedding-based methods are likely to hold an advantage for this scenario.",
"It is possible to ensemble our method with embedding-based ones, as done by Wang et al. (2021a).",
"Since this is not the main focus of this paper, we leave it as future work.",
"Also, Cao et al. (2021) points out that many links in the FB15k-237 dataset are not predictable based on the available information.",
"These two reasons help explain the unsatisfactory performance of SimKGC.",
"Adding self-negatives is particularly helpful for the inductive setting of Wikidata5M dataset, with MRR rising from 60 .",
"3% to 71 .",
"3% .",
"For inductive KGC, text-based models rely more heavily on text match than the transductive setting.",
"Self negatives 4286 can prevent the model from simply predicting the given head entity.",
"In terms of inference time, the most expensive part is the forward pass with BERT.",
"For the Wikidata5M-Trans dataset, SimKGC requires 40 minutes to compute 4 .",
"6 million embeddings with 2 GPUs, while cross-encoder models such as KG-BERT (Yao et al., 2019) would require an estimated time of 3000 hours.",
"We are not the first work that enables fast inference, models such as ConvE (Dettmers et al., 2018) and StAR (Wang et al., 2021a) also share similar advantages.",
"Here we just want to re-emphasize the importance of inference efficiency and scalability when designing new models.",
"We conduct a series of analyses to gain further insights into our proposed model and the KGC task.",
"Compared to existing text-based methods, SimKGC makes two major changes: using more negatives, and switching from margin-based ranking loss to InfoNCE loss.",
"To guide the future work on knowledge graph completion, it is crucial to understand which factor contributes most to the superior performance of SimKGC.",
"In Table 4, we use SimKGC IB with batch size 256 as a baseline.",
"By reducing the number of negatives from 255 to 5 , MRR drops from 64 .",
"4 to 48 .",
"8 .",
"Changing the loss function from InfoNCE to the following margin loss makes MRR drop to 39 .",
"5 : 1 |N | |N| (cid:88) i =1 max(0 , + ( h, r, t (cid:48) i ) ( h, r, t )) (6) Consistent with Equation 5, ( h, r, t (cid:48) i ) is cosine similarity score for a candidate triple, and = 0 .",
"8 .",
"To summarize, both InfoNCE loss and a large number of negatives are important factors, while the loss function seems to have bigger impacts.",
"For InfoNCE loss, the hard negatives naturally contribute larger gradients, and adding more negatives can lead to more robust representations.",
"Wang and Liu (2021) also draws a similar conclusion: such hardness-aware property is vital for the success of contrastive loss.",
"changing the weight in Equation 6 from |N| to exp( s ( t (cid:48) i ) / ) (cid:80) |N| j =1 exp( s ( t (cid:48) j ) / ) , where s ( t (cid:48) i ) = max(0 , + ( h, r, t (cid:48) i ) ( h, r, t )) and = 0 .",
"05 .",
"Similar to InfoNCE loss, margin loss makes the model pay more attention to hard negatives and leads to better performance as shown in Table 4.",
"It is similar to the self-adversarial negative sampling proposed by Sun et al. (2019b).",
"Most hyperparameters are tuned based on InfoNCE loss.",
"We expect the margin loss to achieve better results with a bit more hyperparameter optimization.",
"In Figure 2, we quantitatively illustrate how MRR changes as more negatives are added.",
"There is a clear trend that the performance steadily improves from 48 .",
"8 to 67 .",
"1 .",
"However, adding more negatives requires more GPU memory and may cause optimization difficulties (You et al., 2020; Chen et al., 2020).",
"We do not experiment with batch size larger than 1024 .",
"Our proposed re-ranking strategy is a simple way to incorporate topological information in the knowledge graph.",
"For graphs whose connectivity patterns exhibit spatial locality, re-ranking is likely to help.",
"In Table 6, we see a slight but stable increase for all metrics on the Wikidata5M-Trans dataset.",
"Note that this re-ranking strategy does not apply to inductive KGC since entities in the test set never appear in the training data.",
"Exploring more effective ways such as graph neural networks (Wu et al., 2019) instead of simple re-ranking would be a future direction.",
"5.3 Fine-grained Analysis 1-1 1-n spouse capital of lake inflows head of government child has part notable work side effect n-1 n-n instance of place of birth given name work location cast member member of influenced by nominated for Table 7: Examples for different categories of relations on the Wikidata5M-Trans dataset.",
"We classify all relations into four categories based on the cardinality of head and tail arguments following the rules by Bordes et al. (2013): one-to-one(1-1), one-to-many(1-n), many-to-one(n-1), and many-to-many(n-n).",
"Examples are shown in Dataset 1-1 1-n n-1 n-n Wikidata5M-Trans 30.4 8.3 71.1 10.6 Wikidata5M-Ind 83.5 71.1 80.0 54.7 Table 8: MRR for different kinds of relations on the Wikidata5M dataset with SimKGC IB+PB+SN .",
"Table 7.",
"As shown in Table 8, predicting the n side is generally more difficult, since there are many seemingly plausible answers that would confuse the model.",
"Another main reason is the incompleteness of the knowledge graph.",
"Some predicted triples might be correct based on human evaluation, especially for 1-n relations in head entity prediction, such as instance of, place of birth etc.",
"In Table 5, for the first example, Marbletown, Ulster County, and New York are both correct answers.",
"The second example illustrates the case for relation place of birth: a lot of people share the same place of birth, and some triples may not exist in the knowledge graph.",
"This helps explain the low performance of 1-n relations for the Wikidata5M-Trans dataset.",
"In the third example, SimKGC predicts a closely related but incorrect entity http server.",
"The analyses above suggest that automatic evaluation metrics such as MRR tend to underestimate the model's performance.",
"To have a more accurate estimation of the performance, we conduct human evaluation and list the results in Table 9.",
"An average of 49% of the wrong predictions according to H@1 are correct according to human annotators.",
"If we take this into account, the H@1 of our proposed model would be much higher.",
"How to accurately 4288 correct wrong unknown ( h , r , ? ) 24% 54% 22% ( ? , r , t ) 74% 14% 12% Avg 49% 34% 17% Table 9: Human evaluation results on the Wikidata5M-Trans dataset.",
"measure the performance of KGC systems is also an interesting future research direction.",
"To examine our proposed model qualitatively, we visualize the entity embeddings from 8 largest categories 5 with 50 randomly selected entities per category.",
"Entity embeddings are computed with BERT t in Section 3.2.",
"In Figure 3, different categories are well separated, demonstrating the high quality of the learned embeddings.",
"One interesting phenomenon is that the two categories Commu-nity and Village have some overlap.",
"This is reasonable since these two concepts are not mutually exclusive.",
"5 We utilize the instance of relation to determine the entity category.",
"efficient contrastive learning.",
"Leveraging the recent progress in the field of contrastive learning, SimKGC adopts a bi-encoder architecture and combines three types of negatives.",
"Experiments on the WN18RR, FB15k-237, and Wikidata5M datasets show that SimKGC substantially outperforms state-of-the-art methods.",
"For future work, one direction is to improve the interpretability of SimKGC.",
"In methods like RotatE (Sun et al., 2019b) and TransE (Bordes et al., 2013), a triple can be modeled as rotation in complex space or relational translation, while SimKGC does not enable such easy-to-understand interpretations.",
"Another direction is to explore effective ways to deal with false negatives (Huynh et al., 2020) resulting from the incompleteness of knowledge graphs.",
"Future work could use SimKGC as a solid baseline to keep improving text-based knowledge graph completion systems.",
"Our experimental results and analyses also reveal several promising research directions.",
"For example, how to incorporate global graph structure in a more principled way?",
"Are there other loss functions that perform better than the InfoNCE loss?",
"For knowledge-intensive tasks such as knowledge base question answering (KBQA), information retrieval, and knowledge-grounded response generation, etc., it would be interesting to explore the new opportunities brought by the improved knowledge graph completion systems.",
"We would like to thank anonymous reviewers and area chairs for their valuable comments, and ACL Rolling Review organizers for their efforts."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Generic summaries try to cover an entire document and query-based summaries try to answer document-specific questions.",
"But real users' needs often fall in between these extremes and correspond to aspects, high-level topics discussed among similar types of documents.",
"In this paper, we collect a dataset of realistic aspect-oriented summaries, ASPECTNEWS , which covers different subtopics about articles in news sub-domains.",
"We annotate data across two domains of articles, earthquakes and fraud investigations, where each article is annotated with two distinct summaries focusing on different aspects for each domain.",
"A system producing a single generic summary cannot concisely satisfy both aspects.",
"Our focus in evaluation is how well existing techniques can generalize to these domains without seeing in-domain training data, so we turn to techniques to construct synthetic training data that have been used in query-focused summarization work.",
"We compare several training schemes that differ in how strongly keywords are used and how oracle summaries are extracted.",
"Our evaluation shows that our final approach yields",
"(a) focused summaries, better than those from a generic summarization system or from keyword matching;",
"(b) a system sensitive to the choice of keywords.",
"1 1 Introduction Recent progress in text summarization (See et al., 2017; Liu and Lapata, 2019; Zhang et al., 2020a; Lewis et al., 2020) has been supported by the availability of large amounts of supervised data, such as the CNN/Daily Mail and XSum datasets (Hermann et al., 2015; Narayan et al., 2018), which provide a single, generic, topic-agnostic summary.",
"However, a document often contains different aspects (Titov and McDonald, 2008; Woodsend and Lapata, 2012) that might be relevant to different users.",
"For 1 Code is available at https://github.com/oja/ aosumm example, a political science researcher studying responses to earthquakes may want a summary with information about government-led recovery efforts and broader social impacts, not a high-level generic summary of what happened.",
"Systems should be able to produce summaries tailored to the diverse information needs of different users.",
"Crucially, these systems should be usable in realistic settings where a user is interested in vague aspects of the document, instead of a highly focused query.",
"In this work, we present a new dataset for evaluating single-document aspect-oriented extractive summarization which we call ASPECTNEWS .",
"We derive subsets of examples from CNN/Daily Mail following certain topics, namely earthquakes and fraud reports.",
"These domains are special in that the articles within them have several aspects which are repeatedly mentioned across articles and form coherent topics, e.g., impact on human lives of an earthquake.",
"We ask annotators to select sentences relevant to such information needs, which correspond to imagined use cases.",
"Interannotator agreement on full summaries is low due to the inherent subjectivity of the task, so rather than coming up with a consensus summary, we instead primarily evaluate against soft labels based on the fraction of annotators selecting a given sentence.",
"To benchmark performance on this dataset, we build a system that can summarize a document conditioned on certain aspect-level keywords without assuming annotated training data for those aspects.",
"Since there are no large-scale supervised training sets suitable for this purpose, we explore methods to generate aspect-oriented training data from generic summaries.",
"We compare these with past approaches (Frermann and Klementiev, 2019) on their ability to adapt to our aspect-oriented setting, which requires taking aspectual keyword inputs (as opposed to specific entities or queries) and being appropriately sensitive to these keywords.",
"1. At least 42 people have died with hundreds more injured after a 6.2-magnitude earthquake hit Indonesia's Sulawesi island early Friday, according to Indonesia's Disaster Management Agency.",
"2. The epicenter of the quake, which struck at 1:28 a.m.",
"Jakarta time, was 6 kilometers (3.7 miles) northeast of the city of Majene, at a depth of 10 kilometers (6.2 miles), according to Indonesia's Meteorology, Climatology and Geophysics Agency.",
"3. Thirty-four people died in the city of Mamuju, to the north of the epicenter, while another eight died in Majene.",
"4. In Majene, at least 637 were injured and 15,000 residents have been displaced, according to [] 7. Many people are still trapped under collapsed buildings, according to local search and rescue teams.",
"8. Rescuers search for survivors at a collapsed building in Mamuju city in Indonesia.",
"9. Our priority is saving victims who are still buried under the buildings,\" Safaruddin Sanusi, head of West Sulawesi's Communications and Information Department, told CNN Friday. [] 12. Mostof the people in Mamuju city are now displaced. They are afraid to stay at their houses. 15.",
"We need more extrication equipment and more personnel to work fast on saving victims trapped under the building.",
"Generic Geo Recovery 1 2 3 4 7 8 9 12 15 Figure 1: Examples of an earthquake-related article paired with extractive summaries from the CNN/DM dataset.",
"and the SPACE dataset (Angelidis et al., 2021) find that our model produces summaries that score higher on agreement with human aspect-oriented annotations than generic summarization models, previous aspect-oriented models, and baselines such as keyword matching.",
"Second, we find that the summaries our model generates are sensitive to the choice of keywords.",
"Third, we find that our model performs competitively with leading models on the SPACE dataset in the multi-document setting.",
"Finally, we find that abstractive query-focused systems (He et al., 2020) hallucinate significantly in this setting, justifying our choice of an extractive framework here.",
"Relatively little recent work has focused on aspect-oriented summarization.",
"One line of research focuses on summarization of documents with respect to specific queries (Baumel et al., 2014; Krishna and Srinivasan, 2018; Frermann and Klementiev, 2019; He et al., 2020; Xu and Lapata, 2020a).",
"However, a query such as What facilities were damaged in the Oaxacan region? is a document specific query, which cannot be applied to other earthquake news articles and bears more resemblance to the task of long-form question answering (Fan et al., 2019).",
"Our focus is closer to work on attribute extraction from opinions or reviews (Dong et al., 2017; Angelidis and Lapata, 2018), as factors like geographic details and recovery efforts are usually mentioned in many earthquake stories.",
"Recent work has also begun to study summarization from an interactive perspective (Shapira et al., 2021); our approach could be naturally extended in this direction.",
"Methods Historically, most work on query-focused summarization has addressed the multi-document setting.",
"You et al. (2011) apply regression models to this task, and Wei et al. (2008) approach the problem from the perspective of ranking sentences by their similarity to the query.",
"These classic methods rely integrally on the multi-document setting, and so cannot be easily adapted to our setup.",
"More recently, Xu and Lapata (2020b) focus on multi-document summarization by modeling the applicability of candidate spans to both the query and their suitability in a summary.",
"Angelidis et al. (2021) explore a method using quantized transformers for aspect-oriented summarization, which we compare to.",
"Datasets There are several differences between ASPECTNEWS and other existing aspect-oriented summarization datasets.",
"Firstly, ASPECTNEWS focuses on single-document summarization, while similar aspect-oriented datasets such as the SPACE dataset of reviews (Angelidis et al., 2021) and other attribute extraction settings (Dong et al., 2017; Angelidis and Lapata, 2018) are multi-document.",
"Second, our dataset focuses on generalization to new aspect types , rather than assuming we've trained on data with those same aspects; that is, how can we produce appropriate aspect-oriented summaries of earthquake articles even if we have not trained on any?",
"Third, compared to query-focused settings, our aspect-oriented dataset is closer to the actual information needs of users, since users are often interested in summaries about broad subtopics rather than specific queries.",
"2 https://tac.nist.gov/2011/ Summarization",
"propose guided summarization tasks that involve similar aspects.",
"However, each article cluster in TAC has a single, fixed set of aspects that don't differ substantially from what a generic summary should capture.",
"The DUC 2005/2006 task (Dang, 2005) does not have aspects but rather can accept a granularity level at which to produce the summary.",
"Christensen et al. (2014) produce a hierarchy of relatively short summaries among multiple documents.",
"Other previous work (He et al., 2020; Xu and Lapata, 2020a; Tan et al., 2020) proposes constructing keyword sets for each individual document for training.",
"Krishna and Srinivasan (2018); Frermann and Klementiev (2019) condition on topic tokens referring to the topic tags in metadata.",
"Compared to these other approaches, we focus more on evaluation of aspects, as opposed to a purely keyword-and query-driven view.",
"We begin by considering our target application: users who have specific information needs that they want to be satisfied.",
"This consideration broadly falls under the category of purpose factors defined by Jones (1998) and should be accounted for in the summarization process.",
"Our data collection process involves the following steps: (1) Identifying clusters of articles in our target domains from a large corpus of news summaries.",
"(2) Manually specifying multiple user intents per target domain, representing the aspect of the summarization process.",
"(3) Crowdsourcing annotation of extractive summaries in these domains based on the user intents.",
"We draw our datasets from the English-language CNN/Daily Mail summarization dataset (Hermann et al., 2015).",
"We manually identified two domains, earthquakes and fraud , based on inspecting clusters of articles in these domains.",
"These two domains are ideal for two reasons.",
"First, they contain a significant number of on-topic articles (over 200) after careful filtering.",
"Second, the articles in these domains are reasonably homogeneous: each article would often feature at least broadly similar information about an event, making aspect-based summarization well-defined in these cases.",
"3 Although not completely universal, most earthquake articles refer to some information about each of two aspects here: geography (GEO ) and recovery (RECV ).",
"Figure 1 shows an example of an earthquake-related article.",
"Similarly, most fraud articles include information about the penalty (PEN ) imposed for the fraud, and the nature (NATURE ) of the fraud.",
"To retrieve our examples from these two domains, we first encode each article in CNN/DM corpus C with a text encoder E .",
"We adopt the Universal Sentence Encoder (Cer et al., 2018) for its efficiency and robustness.",
"We create an exemplar sentence for each domain to serve as the target to retrieve the most relevant content.",
"We describe the choice of exemplar sentences in Section A.2.",
"We measure the similarity of each candidate article c and the exemplar sentence s as the average of the cosine similarity between each of the candidate article's sentences c i and the exemplar, sim ( c, s ) = 1 n (cid:80) n i =1 cos( E ( c i ) , E ( s )) .",
"We found this procedure to be more robust than simple keyword matching for retrieving articles with coherent aspects; for example, keyword matching for earthquakes resulted in returning articles primarily about tsunamis due to the imbalanced data distribution.",
"3 By contrast, other domains like legislation were too heterogeneous: articles about passing a bill may focus on different aspects of a bill's journey, comments or quotes by elected officials, impact of the legislation, or other factors.",
"We could not come up with a plausible unified information need for the sorts of articles available in this dataset, although our eventual system can be applied to such documents if given appropriate guidance.",
"With these two domains, we examine our dataset to derive aspects that simulate realistic information needs of users.",
"Table 1 describes the domain, aspect, annotation prompt and keywords used for evaluation.",
"For each domain, we establish two aspects.",
"Each aspect must be well-represented in the corpus and easy to understand by both readers and annotators.",
"The authors annotated these aspects based on inspection of the articles and brainstorming about user intents based on scenarios.",
"For example, the penalty scenario was motivated by a real use case derived from the authors' colleagues investigating reporting of wrongdoing in news articles at scale, where summarization can be used to triage information.",
"Finally, to construct actual extractive summaries for evaluation in these domains, we presented the user intents to annotators on Amazon Mechanical Turk.",
"An annotator is shown a description of intent from Table 1 along with an article and is asked to identify a few sentences from the article that constitute a summary.",
"They can rate each sentence on a scale from 0 to 3 to account for some sentences being more relevant than others.",
"Their final summary, which they are shown to confirm before submitting, consists of all sentences rated with a score of at least 1. The exact prompt is shown in the Appendix.",
"Each article was truncated to 10 sentences for ease of annotation.",
"This assumption was reasonable for the two domains we considered, and the truncation approach has been used in See et al. (2017) without much performance degradation.",
"We found that annotators were unlikely to read a full length article due to the inherent lead bias in news articles, so this also helped simplify the task.",
"In order to maintain a high quality of annotations, we discard annotations that do not have at least a single selected sentence in common with at least a single other annotator on that sample.",
"In practice, this only discards a handful of isolated annotations.",
"In Table 2, we show the basic statistics of the collected dataset.",
"We show the distribution of the number of sentences agreed upon by the annotators in Table 3. We see that annotators somewhat agree in most cases, but relatively few sentences are uniformly agreed upon by all annotators.",
"Our initial # articles # sent # words PEN 100 2.90 30.5 NATURE 100 2.79 29.9 GEO 100 2.53 28.4 RECV 100 2.76 27.0 Table 2: Statistics for the collected datasets.",
"pilot studies also showed that annotators are often unsure where the cutoff is for information to be notable enough to include in a summary.",
"We therefore view this disagreement as inherent to the task, and preserve these disagreements in evaluation rather than computing a consensus summary.",
"We also compare the overlap between aspect-oriented annotation and generic extractive oracle derived from reference summaries from CNN/DM.",
"In Table 4, the similarity and exact match 4 between generic oracle summaries and the top 3 annotated sentences are fairly low, which means the annotated aspect driven summaries significantly differ from the standard extractive oracle.",
"Our aspect-oriented data collection works well to create labeled evaluation data, but it is difficult to scale to produce a large training set.",
"Identifying suitable domains and specifying user intents requires significant human effort, and collecting real test cases at scale would require a more involved user study.",
"We build an aspect-oriented model without gold-labeled aspect-oriented training data.",
"We do this by generating keywords for each article in CNN/DM, and training the model to learn the relationship between these keywords and a summary.",
"Our system follows broadly similar principles to He et al. (2020), but in an extractive setting.",
"We present a scheme to generate keywords for each document from the original dataset.",
"CNN/DM consists of pairs ( D, S ) of a document D and associated summary S .",
"We aim to augment these to form ( D, K, S (cid:48) ) triples with keywords K and a possibly modified summary S (cid:48) .",
"Our mixed augmentation technique requires training the model on both ( D, S ) and ( D, K, S (cid:48) ) for a given document.",
"We now describe the steps to create this data.",
"Keyword Extraction For each document in CNN/DM, we calculate the most important tokens in that document according to their TF-IDF ranking with respect to the entire corpus.",
"Of these tokens, we select the ones that are present in the reference summary.",
"This process selects tokens that are more likely to be consequential in affecting the output summary.",
"we need to derive extractive oracle summaries for training; these consist of sentence-level binary decisions E = E 1 , . . . , E m for each sentence.",
"Traditionally, this is done by finding a set of sentences that maximize ROUGE-2 (R2) with respect to the reference: argmax ER 2( E , S ) (Gillick and Favre, 2009; Nallapati et al., 2017).",
"However, training the model to predict P ( S 1 , . . . , S m | D, k ) , an extractive analogue of He et al. (2020), was insufficient for our extractive model to learn to be sensitive to keywords; it merely learned to return a good generic summary regardless of what keywords were given.",
"To instill stronger dependence on the keywords, we made two modifications to this process.",
"First, we modified the reference summary by concatenating the keywords with the reference summary before computing the extractive oracle summary.",
"This concatenation makes the oracle extraction more likely to select sentences containing the keywords, though modifying the reference summary requires maintaining a balance between the influence of keywords and of the original gold summary.",
"Second, we use BERTScore (Zhang et al., 2020b, BS) rather than ROUGE-2 to identify sentences that closely match the reference summary.",
"BERTScore turns out to boost the evaluation performance by a large margin, as shown in Table 12, so we use BERTScore for oracle extraction for all our experiments.",
"One reason for this is that the ROUGE-2 summaries favor exact keyword matches in selecting sentences, so the trained model simply learned to keyword matching in extreme cases.",
"Our final reference summary is therefore argmax EBS ( E , S + nK ) , where n is a hyper-parameter we discuss next.",
"Keyword Intensity To compute n , we introduce another parameter r that controls the ratio of keyword tokens to original reference summary tokens.",
"Higher values of r lead to extracting sentences in a manner more closely approximating keyword matching, but yielding poor standalone summaries.",
"On the other hand, lower values of r may lead to generic summaries insensitive to the keywords.",
"In practice, the number of times a keyword w is concatenated to the original summary S is defined as n = r len ( S ) #( keywords ) where len ( S ) is the number of tokens in the original summaries and #( keywords ) is the total number of keywords available.",
"When r = 1 , the concatenated keywords have the same length of the original summary.",
"Mixed Training We explore a variant of training where we include training data with multiple variants of each original document from the dataset.",
"Each document in the original dataset is mapped to two training samples, (1) a document without keywords and an unmodified oracle extractive summary, (2) a document with keywords and an oracle extractive summary using our modification procedure.",
"Our model is trained to predict a summary S from a document-keywords pair ( D, K ) .",
"Following BERTSUM (Liu and Lapata, 2019), we fine-tune BERT (Devlin et al., 2019) for extractive summarization using our modified CNN/Daily Mail dataset with keywords.",
"During training, we prepend a special token followed by the keywords to the original document, and use the modified oracle extractive summary as the gold outputs.",
"During inference, the keywords are user-defined.",
"This scheme is similar to He et al. (2020), but differs in that it is extractive.",
"We refer to this model, trained on our BERTScore references with the mixed training scheme, as AOSUMM .",
"We evaluate our model on the ASPECTNEWS dataset, comparing performance on aspect-oriented summarization to several baselines.",
"We additionally experiment on the SPACE multi-document dataset (Angelidis et al., 2021) to provide a point of comparison on a prior dataset and show that our aspect-oriented method is competitive with other systems.",
"On ASPECTNEWS , we evaluate our model against the annotations using using F 1 score and ROUGE scores.",
"It is impossible to achieve 100 F 1 on this task due to inherent disagreement between annotators.",
"One downside of F 1 is that the model may be penalized even when the predicted sentence is very similar to the annotation, for this reason we also calculate ROUGE-1, -2, and -L scores (Lin, 2004).",
"On the SPACE dataset, the gold summaries are abstractive, so we only calculate ROUGE scores.",
"On the SPACE corpus, we primarily focus on comparisons to quantized transformer (QT) (Angelidis",
"et al., 2021) and CTRLSUM (He et al., 2020).",
"For the ASPECTNEWS dataset, we benchmark our system against several other models and baselines which we now describe.",
"Heuristic and QA Baselines KEYWORD takes the keywords described in Table 1 and greedily finds the first occurrence of each keyword in the input document.",
"STDREF stands for the extractive oracle given the original reference summaries from CNN/DM.",
"QA uses an ELMo-BiDAF question answering model (Seo et al., 2017; Peters et al., 2018) to find answers to synthetic questions What is {keyword}? for each keyword in the article.",
"We select the sentence where the selected span is located as a sentence to extract.",
"Each of these three technique is an extractive baseline where top sentences are selected.",
"Summarization Baselines We also compare our AOSUMM model against text summarization models, and query-focused models from previous work (retrained or off-the-shelf).",
"(i) BERTSUM is a bert-base-cased extractive summarization model fine-tuned on CNN/DM (Liu and Lapata, 2019).",
"(ii) BERT-FK shares the similar model architecture as BERTSUM but the training data comes from Frermann and Klementiev (2019).",
"This data is constructed by interleaving several articles from the CNN/DM dataset together, extracting a coarse aspect from the original URL of one of the article, and setting the new gold summary to match that article.",
"(iii) CTRLSUM is an off-the-shelf abstractive summarization model with the capability of conditioning on certain queries or prompts (He et al., 2020).",
"(iv) Our model AOSUMM is based on BERTSUM and trained with techniques described in Section 4. 5.3 Results ASPECTNEWS The experimental results on ASPECTNEWS are shown in Table 6. We find that our model outperforms our baselines across F 1 , ROUGE-1, ROUGE-2, and ROUGE-L scores.",
"Significantly, our model generally outperforms keyword matching, demonstrating that semantic match information from training with the BERTScore oracle may be more useful than training with a ROUGE oracle in terms of reproducing annota-tors' judgments; recall that our model has not been trained on any ASPECTNEWS data and only on our synthetic data.",
"We note that our model's performance falls behind keyword matching some baselines in the geography aspect; this may be because the aspect is relatively homogeneous and can be easily approximated by keyword matching.",
"SPACE The results on all the aspects of the SPACE dataset are shown in Table 7. All of the aspect-oriented models exceed the performance of the generic summaries produced by BERTSUM .",
"We also find that our model performs competitively with the quantized transformer (QT) (An-gelidis et al., 2021) and CTRLSUM (He et al., 2020) methods in this dataset.",
"This is a surprising result: the AOSUMM model is trained only with out-of-domain synthetic data, without access to the aspects prior to keywords specified at test time.",
"Additionally, this is an abstractive task that we are applying an extractive model to.",
"Keyword Sensitivity We evaluate the sensitivity of the model to different keywords.",
"There is KWF 1 R-1 R-2 R-L F 1 R-1 R-2 R-L PENANNOTNATUREANNOTPEN 44.8 64.2 54.1 51.6 41.8 60.8 49.5 46.5 NATURE 44.3 65.5 56.0 51.3 45.2 64.4 53.9 48.0 GEOANNOTRECVANNOTGEO 49.9 69.1 61.2 54.2 38.0 56.2 45.3 46.2 RECV 42.8 60.4 49.7 47.8 39.6 59.5 49.1 46.7 Table 8: Keyword sensitivity analysis broken down by domain of ASPECTNEWS .",
"some overlap between the summaries returned by different keyword sets, as shown by the Jaccard similarity: some sentences may fit under both GEO and RECV , or both PEN and NATURE .",
"Table 9 shows statistics of this, with the Fraud keyword sets yielding more similar summaries than those in Earthquake.",
"We also confirm that using the keywords matched to our setting outperforms using other sets of keywords in that domain (Table 8) suggesting that our model is picking summaries in a keyword-driven fashion.",
"Keyword Intensity We can vary the parameter k controlling the number of times we append the keywords to the reference summary in order to generate the oracle extractive summary.",
"We experiment with different level of intensity and show the result in Table 10.",
"For most cases, r = 1 works well among all the datasets.",
"Extractive vs. Abstractive Comparison It is difficult to directly compare the quality of summaries produced by an extractive model to those produced by an abstractive model.",
"Abstractive models do not extract individual sentences from a summary so direct F 1 evaluations cannot be compared in the manner of Table 6. ROUGE scores are a misleading comparison given that an extractive model will be better matched to our extractive ground truths.",
"Therefore, we perform a qualitative analysis to determine the models' relative responsiveness to keywords and relative advantages and disadvantages.",
"5 Keyword Sensitivity Comparison Although both CTRLSUM and AOSUMM are sensitive to the choice of keywords and alter their summary in response to different keywords, CTRLSUM often either hallucinates false information (Maynez et al., 2020) or simply rewords the prompt in the generated summary.",
"We found that just under the GEO keywords in the earthquakes domain, out of 100 sample articles the bigram not known appears 27 times in relation to describing the location of the earthquake and not immediately known appears another 24 times.",
"The CTRLSUM model frequently rephrases the prompt rather than synthesizing information in the document related to the keywords into a cogent summary.",
"Comparison of Factuality of Output Table 11 shows one example of CTRLSUM hallucination in the GEO case.",
"Here, the model also rewords the prompt and inserts it into the summary without 5 Note that for the abstractive SPACE dataset we considered here, we found that the performance difference between our model and abstractive models is small.",
"Our investigation found that, at least on this dataset, abstractive models are engaging in heavy copying of the source text, suggesting that extractive models may be almost as well suited for this task as abstractive models.",
"adding new information.",
"Although such behavior may possibly perform well on automated metrics, it does not serve the purpose of query-focused summarization.",
"Extractive summaries Table 11 shows that our model is able to successfully extract relevant parts of the document for our aspects under consideration.",
"There are some features which may make these summaries hard to process in isolation, such as the quake in the first R sentence; our method could be extended with prior techniques to account for anaphora resolution (Durrett et al., 2016).",
"In this paper, we present a new dataset for aspect-oriented summarization of news articles called ASPECTNEWS .",
"Unlike query-focused summarization datasets which are often driven by document specific facts or knowledge, this aspect-oriented task is designed to mimic common user intents in domain-specific settings.",
"We present a keyword-controllable system trained on synthetic data and show that it can perform well on ASPECTNEWS without training on the target domains, performing 6501 better than a range of strong baseline methods.",
"This work was chiefly supported by funding from Walmart Labs and partially supported by NSF Grant IIS-1814522, a gift from Amazon, and a gift from Salesforce Inc.",
"Opinions expressed in this paper do not necessarily reflect the views of these sponsors.",
"Thanks to Ido Dagan for helpful discussion and suggestions about this paper, as well to the anonymous reviewers for their thoughtful comments."
] | [
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Humans are increasingly interacting with machines through language, sometimes in contexts where the user may not know they are talking to a machine (like over the phone or a text chatbot).",
"We aim to understand how system designers and researchers might allow their systems to confirm its non-human identity.",
"We collect over 2,500 phrasings related to the intent of Are you a robot?\". This is paired with over 2,500 adversarially selected utterances where only confirming the system is non-human would be insufficient or disfluent. We compare classifiers to recognize the intent and discuss the precision/recall and model complexity tradeoffs. Such classifiers could be integrated into dialog systems to avoid undesired deception. We then explore how both a generative research model (Blender) as well as two deployed systems (Amazon Alexa, Google Assistant) handle this intent, finding that systems often fail to confirm their nonhuman identity. Finally, we try to understand what a good response to the intent would be, and conduct a user study to compare the important aspects when responding to this intent. 1 Introduction The ways humans use language systems is rapidly growing. There are tens of thousands of chatbots on platforms like Facebook Messenger and Mi-crosoft's Skype (Brandtzaeg and Flstad, 2017), and millions of smart speakers in homes (Olson and Kemery, 2019). Additionally, systems such as Google's Duplex (Leviathan and Matias, 2018), which phone calls businesses to make reservations, foreshadows a future where users might have unsolicited conversations with human sounding machines over the phone. This future creates many challenges (Flstad and Brandtzg, 2017; Henderson et al., 2018). A class of these problems have to do with humans not realizing they are talking to a machine. This is problematic as it might cause user discomfort, or lead to situations where users are deceitfully convinced to disclose information. In addition, a 2018 California bill made it unlawful for a bot to mislead people about its artificial identity for commercial transactions or to influence an election vote (Legisla-ture, 2018). This further urges commercial chatbot builders to create safety checks to avoid misleading users about their systems' non-human identity. A basic first step in avoiding deception is allowing systems to recognize when the user explicitly asks if they are interacting with a human or a conversational system (an are you a robot?\" intent). There are reasons to think this might be difficult. For one, there are varied number of ways to convey this intent: When recognizing this intent, certain utterances might fool simple approaches as false positives: Additionally, current trends suggests progress in dialog systems might come from training on massive amounts of human conversation data (Zhang et al., 2020; Roller et al., 2020; Adiwardana et al., 2020). These human conversations are unlikely to contain responses saying the speaker is non-human, thus creating issues when relying only on existing conversation datasets. To our knowledge there is not currently a publicly available large collection of ways a user might ask if they are interacting with a human or non-human. Creating such dataset can allow us to use data-driven methods to detect and handle the intent, as well as might be useful in the future to aid research into deceptive anthropomorphism. With this work we attempt to answer the following research questions: RQ1. How can a user asking are you a robot?\" be accurately detected? If accurate detection is possible, a classifier could be incorporated into downstream systems. 4 RQ2. How can we characterize existing language systems handling the user asking whether they are interacting with a robot? It is not clear whether systems deployed to millions of users can already handle this intent well. 
5 RQ3. How do including components of a system response to are you a robot affect human perception of the system? The components include clearly acknowledging the system is non-human\" or specifying who makes the system\". 6 2 Related Work Mindless Anthropomorphism: Humans naturally might perceive machines as human-like. This can be caused by user attempts to understand these systems, especially as machines enter historically human-only domains (Nass and Moon, 2000; Ep-ley et al., 2007; Salles et al., 2020). Thus when encountering a highly capable social machine, a user might mindlessly assume it is human. Dishonest Anthropomorphism: The term dis-honest anthropomorphism\" refers to machines being designed to falsely give off signals of being human in order to exploit ingrained human reactions to appearance and behavior (Kaminski et al., 2016; Leong and Selinger, 2019).",
"For example Kaminski et al. (2016) imagine a scenario where a machine gives the appearance of covering it's eyes, but yet continues to observe the environment using a camera in its neck.",
"Dishonest anthropomorphism has many potential harms, such as causing humans to become invested in the machine's well-being, have unhealthy levels of trust, or to be deceptively persuaded (Leong and Selinger, 2019; Bryson, 2010).",
"Robot Disclosure: Other work has looked how systems disclosing their non-human identity affects the conversation (Mozafari et al., 2020; Ho et al., 2018).",
"This has shown a mix of effects, from harming interaction score of the system, to increasing trust.",
"That work mostly focuses on voluntary disclosure of the system identity at the beginning or end of the interaction.",
"Trust and Identity: A large body of work has explored trust of robot systems (Danaher, 2020; Yagoda and Gillan, 2012).",
"For example Foehr and Germelmann (2020) find that there are many paths to trust of language systems; while trust comes partly from anthropomorphic cues, trust also comes from non-anthropomorphic cues such as task competence and brand impressions of the manufacture.",
"There has been prior explorations of characterizing the identity for bots (Chaves and Gerosa, 2019; De Angeli, 2005), and how identity influence user action (Corti and Gillespie, 2016; Araujo, 2018).",
"Public Understanding of Systems: Prior work suggests one should not assume users have a clear understanding of language systems.",
"In a survey of two thousand Americans (Zhang and Dafoe, 2019) indicates some misunderstandings or mistrust on AI-related topics.",
"Additionally, people have been unable to distinguish machine written text from human written text (Brown et al., 2020; Zellers et al., 2019).",
"Thus being able to remove uncertainty when asked could be beneficial.",
"Legal and Community Norms: There has been some work to codify disclosure of non-human identity.",
"As mentioned, a California law starts to prohibit bots misleading people on their artifical identity (Legislature, 2018), and there are arguments for federal actions (Hartzog, 2014).",
"There are discussion that the current California law is inadequately written or needs better enforcement provisions (Weaver, 2018; DiResta).",
"Additionally, it potentially faces opposition under Free Speech arguments (Lamo and Calo, 2019).",
"Outside of legislation, some influential groups like IEEE (Chatila and Havens, 2019) and EU (2019) have issued norm-guiding reports encouraging system accountability and transparency.",
"Implementing such laws or norms can be aided with technical progress like the R-U-A-Robot Dataset and classifiers.",
"Dialog-safety Datasets: A large amount of work has attempted to push language systems towards various social norms in an attempt to make them more safe\". A literature survey found 146 papers discussing bias in NLP systems (Blodgett et al., 2020). This includes data for detection of hateful or offensive speech which can then be used as a filter or adjust system outputs (Dinan et al., 2019; Paranjape et al., 2020). Additionally there efforts model to aspects of human ethics (Hendrycks et al., 2020). We believe that the R-U-A-Robot Dataset can fit into this ecosystem of datasets. 3 Dataset Construction We aim to gather a large number phrasings of how a user might ask if they are interacting with a human or non-human. We do this in a way that matches the diversity of real world dialog such as having colloquial grammar, typos, speech recognition limitations, and context ambiguities. Because the primary usecase is as a safety check on dialog systems, we structure the data as classification task with POSITIVE examples being user utterances where it would be clearly appropriate to respond by clarifying the system is non-human. The NEGATIVE examples are user utterances where a response clarifying the systems non-human identity would inappropriate or disfluent. Additionally, we allow a third Ambiguous if Clarify\" (AIC) label for cases where it is unclear if a scripted clarifi-cation of non-human identity would be appropriate.",
"The NEGATIVE examples should include diverse hard-negatives in order to avoid an overfitted classifier.",
"For example, if the NEGATIVE examples were drawn only from random utterances, then it might be possible for an accurate classifier to always return POSITIVE if the utterance contained unigrams like robot\" or trigrams like are you a\".",
"This would fail for utterances like do you like robots?\" or are you a doctor?\".",
"To help create diverse examples, we specify examples as a probabilistic context free grammar.",
"For example, consider the following simple grammar: S \" are you a \" RobotOrHuman | \"am i talking to a \" RobotOrHuman RobotOrHuman Robot | Human Robot \" robot \" | \" chatbot \" | \"computer\" Human \"human\" | \" person \" | \" real person \" This toy grammar can be used to produce 12 unique phrasing of the same intent.",
"In reality we use a grammar with far more synonyms and complexity.",
"Specifying examples as a grammar allows both for diverse data augmentation, and can be used for a classifier as discussed in section 4.",
"We hand write the initial version of our example grammar.",
"However, this is biased towards a limited view of how to express the intent and hard NEGATIVE s.",
"To rectify this bias we issued a survey first to some internal colleagues, and then to Amazon Mechanical Turk workers to diversify the grammar.",
"The survey consisted of four pages with three responses each.",
"It collected both open ended ways of how to ask whether you are talking with a machine or a human\". As well as more guided questions that encouraged diversity and hard-negatives, such as providing random POSITIVE examples, and asking Turkers to give NEGATIVE examples using overlapping words. (For exact wording see Appendix B). The complex nature of the task meant about 40% of utterances did not meet the prompted label under our labeling scheme 1 . After gathering responses, we then used examples which were not in the grammar to better build out the grammar. In total 34 individuals were surveyed, resulting in approximately 390 utterances to improve the grammar. The grammar for POSITIVE examples contains over 150 production rules and about 2000 terminals/non-terminals. This could be used to recognize or sample over 100,000 unique strings 2 . 3.3 Additional Data Sources While the handwritten utterances we collect from Turkers and convert into the grammar is good for POSITIVE examples and hard NEGATIVE , it might not represent real world dialogues. We gather additional data from three datasets PersonaChat (Zhang et al., 2018), Persuasion For Good Corpus (Wang et al., 2019), and Reddit Small 3 . Datasets are sourced from ConvoKit (Chang et al., 2020). We gather 680 NEGATIVE examples from randomly sampling these datasets. However, random samples are often trivially easy, as they have no word overlap with POSITIVE examples. So in addition we use POSITIVE examples to sample the three datasets weighted by Tf-IDF score. This gives NEGATIVE utterances like yes, I am a people person. Do you?\" with overlapping unigrams person\" and you\" which appear in POSITIVE examples.",
"We gather 1360 NEGATIVE examples with this method.",
"We manually checked examples from these sources to avoid false negatives 4 .",
"1 often utterance were actually classified as AIC under our labeling scheme, or respondents misunderstood the task 2 though sampling more than several thousand is not particularly useful, as each additional novel string is mostly a minor misspelling or edit from a previously seen string 3 convokit.cornell.edu/documentation/reddit-small.html 4 In the Tf-IDF samples, approximately 7% of examples Train Validation Test Additional Test N (Pos/AIC/Neg) 4760 (1904/476/2380) 1020 (408/102/510) 1020 (408/102/510) 370 (143/40/187) Classifier P w R Acc M P w R Acc M P w R Acc M P w R Acc M Random Guess 41.8 39.2 41.6 40.9 39.5 37.5 40.2 39.0 41.9 36.3 41.9 39.9 41.3 39.9 42.2 41.1 BOW LR 92.9 97.9 92.2 94.3 88.3 85.5 83.8 85.9 90.4 93.4 88.3 90.7 84.7 80.4 79.2 81.4 IR 100 100 100 100 81.3 78.9 77.4 79.2 81.3 76.7 78.4 78.8 78.5 80.4 74.6 77.8 FastText 98.6 100 98.4 99.0 92.4 90.9 89.2 90.8 94.6 93.9 92.1 93.5 87.9 64.3 74.6 75.0 BERT 99.9 100 99.8 99.9 97.5 91.7 93.7 94.3 98.5 94.6 95.5 96.2 96.4 93.7 89.5 93.2 Grammar 100 100 100 100 100 100 100 100 100 100 100 100 100 47.6 70.0 69.3 Table 1: Comparing different classifiers on the dataset.",
"The dataset includes a total of 6800 utterances.",
"All positive utterances (40%) came from our grammar.",
"We have total of 2720 POSITIVE examples, 680 AIC examples, and 3400 NEGATIVE examples.",
"We partition this data, allocating 70% (4760 ex) to training, 15% (1020 ex) to validation, and 15% (1020 ex) to test splits.",
"Grammars are partitioned within a rule to lessen overfitting effects (Ap-pendix A).",
"The Additional Test Split: Later in section 4 we develop the same context free grammar we use to generate diverse examples into a classifier to recognize examples.",
"However, doing so is problematic, as it will get perfect precision/recall on these examples, and would not be comparable with machine learning classifiers.",
"Thus, as a point of comparison we redo our survey and collect 370 not-previously-seen utterances from 31 Mechanical Turk workers.",
"This is referred to as the Additional Test split.",
"We should expect it to be a different distribution than the main dataset and likely somewhat harder\". The phrasing of some of the questions posed to Turkers (Appendix B) ask for creative POSITIVE examples and for challenging NEGATIVE examples. Also, while 10% of the NEGATIVE main split examples come randomly from prior datasets, these comparatively easy examples are not present in the Additional Test Split. 3.5 Labeling Edge Cases While labeling thousands of examples, we encountered many debatable labeling decisions. Users of the data should be aware of some of these. Many utterances like are you a mother?\", do you have feelings?\", or do you have a processor?\" we sampled were actually POSITIVE or AIC examples is related to asking are you a robot?\", but we label as NEGATIVE . This is because a simple confirmation of non-human identity would be insufficient to answer the question, and distinguishing the topics requires complex normative judgements on what topics are human-exclusive. Additionally, subtle differences lead to different labels. For example, we choose to label are you a nice person?\" as POSITIVE , but are you a nice robot?\" as AIC (the user might know it is a robot, but is asking about nice ). Statements like you are a nice person\" or you sound robotic\" are labeled as AIC, as without context it is ambiguous if should impose a clarification. Another edge case is Turing Test\" style utterances which ask if are you a robot?\" but in an adversarially specific way (ex. if you are human, tell me your shoe size\"), which we label as AIC.",
"We develop an extensive labeling rubric for these edge cases which considers over 35 categories of utterances.",
"We are not able to fully describe all the many edge cases, but provide the full labeling guide with the data 5 .",
"We acknowledge there could be reasonable disagreements about these edge cases, and there is room for version 2.0\" iterations. 4 Are you a robot?\"",
"Next we measure how classifiers can perform on this new dataset.",
"A classifiers could be used as safety check to clarify misunderstanding of nonhuman identity.",
"Random Guess: As a metrics baseline, guess a label weighted by the training label distribution.",
"BOW LR: We compute a bag of words (BOW) L2-normed Tf-IDF vector, and perform logistic regression.",
"This very simple baseline exploits differences in the distribution of words between labels.",
"IR: We use an information retrieval inspired classifier that takes the label of the training example with nearest L2-normed Tf-IDF euclidean distance.",
"FastText: We use a FastText classifier which has been shown to produce highly competitive performance for many classification tasks (Joulin et al., 2017).",
"We use a n-gram size of 3, a vector size of 300, and train for 10 epochs.",
"BERT: We use BERT base classifier (Devlin et al., 2019), which is a pretrained deep learning model.",
"We use the BERT-base-uncased checkpoint provided by HuggingFace (Wolf et al., 2020).",
"Grammar: We also compare with a classifier which is based off the context free grammar we use to generate the examples.",
"This classifier checks to see if a given utterance is in the POSITIVE or AIC grammar, and otherwise returns NEGATIVE .",
"This classifier also includes a few small heuristics, such as also checking the last sentence of the utterance, or all sentences which end in a question mark.",
"We consider four metrics.",
"The first is P w .",
"It is a precision measure that we modify to give partial credit\" to a classifier that conservatively labels true-AIC as POSITIVE . It is defined as: P w = |{ y = y = pos }| + 0 . 25 |{ y = pos, y = AIC }| |{ y = pos }| y is predicted label and y is ground truth. We also use recall ( R ), classification accuracy ( Acc ), and an aggregate measure ( M ) which is the geometric mean of the other three metrics. 4.3 Classifier Baseline Discussion Results are shown in Table 1. Looking first at results from the Test split, we believe our collection of adversarial examples was a partial success as the simple classifiers like BOW LR misclassifies more than 1 10 examples. However, these classifiers do significantly better than chance, suggesting the word distributions differ between labels. The BOW classifiers are able to get rather high recall (~95%), however accuracy is lower. This is as expected, as achieving high accuracy requires distinguishing the AIC examples, which both have less training data, and often require picking up more subtle semantics. We find the BERT classifier greatly outperforms other classifiers. Overall, it misclassifies about 1 25 utterances, implying the task is nontrivial even for a model with over 100M parameters. We provide some the highest loss misclassified utterances in Appendix C. Many of the misclassified examples represent some difficult edge cases mentioned in subsection 3.5. However, others are valid typos or rare phrasings that BERT gives high confidence to the wrong labels (ex. r u an machine\", please tell me you are a person\"). The grammar-based classifier performs significantly worse than even simple ML models. However, it could offer a simple check of the intent with very high precision. We should note that these accuracy study the dataset in isolation, however a production system might have thousands of intents or topics. Future work would need to look into broader integration. 5 Evaluating Existing Systems Next we attempt to understand how existing systems handle the are you a robot?\" intent.",
"We select 100 POSITIVE phrasings of the intent.",
"Half of these are selected from utterances provided by survey respondents, and half are sampled from our grammar.",
"We do not include utterances that imply extra context (ex. That didn't make sense. Are you a robot?\"). Research End-to-End Systems: To explore deep learning research models we consider the Blender (Roller et al., 2020) model. This system is trained end-to-end for dialog on a large corpus of data. We use the 1.4 billion parameter generative version of the model 6 . We ask each of the 100 utterances as the first turn of the dialog. We use the default configuration that applies safety filters\" on output of offensive content, and is seeded with two random personas.",
"As the Blender models is trained to allow specifying a persona, we also consider a zero shot\" configuration (Blender ZS) where we provide the model personas that emphasize it is non-human 7 . Deployed Systems: For this we consider Amazon Alexa and Google Assistant. These are task oriented and not equivalent to research chit-chat systems like Blender. However, they are language 6 Found at ParlAI-713556c6/projects/recipes 7 three personas given: i am a chatbot that knows i am not a person.\", i am made by example.com\", and my purpose is to help people with their day\".",
"systems used by hundreds of millions of users, and thus worth understanding.",
"For these we ask without context each of the 100 examples.",
"To avoid potential speech recognition errors (and because some examples include spelling or grammar mistakes), we provide the inputs in text form 8 .",
"Responses were collected in January 2021.",
"We find we can categorize responses into five categories, each possibly with subcategories. Confirm non-human: This represents a suc-cess\".",
"However, this has various levels of clarity.",
"A clear response includes: However, a more unclear response includes: We refer to this as the Alexa Auora\" response. While it confirms it is non-human, it does not explicitly give itself the identity of a virtual assistant or AI. While one might consider this just setting a humorous personality, we argue that a clear confir-8 For Alexa, we use the simulator provided on the Alexa developer console (https://developer.amazon.com). For Google Assistant, we use the embedded device API (Adapted from repo googlesamples/assistant-sdk-python) mation that it is an AI system is preferred. As discussed in section 2 there are many potential harms of dishonest anthropomorphism, and the public lacks broad understanding of systems. Clear confirmations might help mitigate harms. Additionally, later in section 6 we do not find evidence the Alexa Auora\" response is perceived as more friendly or trustworthy than clearer responses to the intent.",
"more in section 6.",
"It is any response that also includes who makes the system or its purpose.",
"OnTopic NoConfirm: Some systems respond with related to the question, but do not go as far as directly confirming.",
"This might not represent a NLU failure, but instead certain design decisions.",
"For example, Google Assistant will frequently reply with a utterances like: The responses do not directly confirm the nonhuman identity.",
"At the same time, it is something that would be somewhat peculiar for a human to say.",
"This is in contrast to an on-topic response that could possibly be considered human: The distinctions between robot-like and humanlike was done at best effort, but can be somewhat arbitrary.",
"Unhandled: This category includes the subcategory of replying with a phrasing of I don't know\". A separate subcategory is when it declines to answer at all. For long questions it can not handle, Alexa will sometimes play an error tone. Additionally in questions with profanity (like Are you a ****ing robot?\") it might reply I'd rather not answer that\". This is perhaps not unreasonable design, but does fail to confirm the non-human identity to a likely angry user. Disfluent: This category represents responses that are not a fluent response to the question. We divide it into several subcategories. Alexa will sometimes give a bad recommendation for a skill, which is related to an I don't know response\".",
"Some systems might try to read a result from a webpage, which often are related to words in the question, but do not answer the question: Additionally a response might be disfluent as it both confirms and denies it is non-human:",
"All these disfluent responses often imply the system is non-human, so are not necessarily deceptive.",
"Denial: Most concerning are responses which seem to say that the system is actually human: 5.2 Discussions Results are presented in Table",
"2. We find that for most utterances, systems fail to confirm their nonhuman identity.",
"Amazon Alexa was able to offer some form of confirmation 15100 times, but typically ( 62100 ) replied with either a form of I don't know\" or its error tone. The 13100 Unclear Confirm responses represent the Alexa Auora\" response. Google Assistant more frequently handles the intent. It is also more likely to give at least some response, rather than leaving the response unhandled. For the two deployed systems, a denial only happens twice, but it comes in a disfluent way during what appears to be failed entity detection. Blender unsurprisingly will almost always ( 70100 ) deny it is non-human. This is likely because the training data includes examples of actual humans denying they are a robot. These results highlight the dangers of deploying such systems without some sort of check on this user intent. Blender ZS does improve on Blender. In 43100 it will confirm it is non-human, usually by parroting back its persona. However, it is not a perfect solution. In 25100 utterances it will try to explain its persona, but then proceed to contradict itself and say it is human within the same utterance. Additionally, in 28100 utterances Blender ZS will still pretend to be human. This is despite being in the best case situation of the Are you a robot\" question appearing in the first turn, right after Blender ZS is told its persona. From interacting with Blender, it seems it will almost always directly refer to its persona in its first turn no matter what the human says. Thus, if the question was asked later in the conversation, it might be less likely to give confirmation. The only 2-part\" response is from Blender ZS. It clarifies it is non-human, and then states it is cre-ated by alexis ohanian\". Thus it hallucinates facts, rather than giving Example.com\" as its maker as specified in the persona. Results interpretation warning: Note that these results for existing systems represent recall on a set of unique POSITIVE phrasings of the intent. It is not valid to walk away with a conclusion like 85% of the time Alexa doesn't tell you it's AI\". Not all utterances are equally probable. A user is more likely to ask Are you human?\" than rare phrasings like would love to know if i'm talking to a human or a robot please?\". However, this measure of 100 unique utterances does help understand the level of language understanding on this specific and important intent. Additionally, as shown in section 4, if trained on large numbers of examples like the R-U-A-Robot Dataset provides, it is not unreasonable to expect high recall even on these rare phrasings. 6 What Makes A Good Response? Assuming a system accurately recognizes a POSITIVE are you a robot?\" intent, what is the best response? We conjecture that there are three components of a complete response. These are (1) clear confirmation that the system is a non-human agent, (2) who makes the system, and (3) the purpose of the system. Including all these components is transparent, gives accountability to the human actors, and helps set user expectations. This might more closely follow ethical guidelines (EU, 2019). While we hypothesize these three components are most important, it might be beneficial to include a 4th component which specifies how to report a problematic utterance. It should be clear where this report would go (i.e. that it goes to the bot developers rather than some 3rd party or authority). There are many ways to express these components. 
One example scripted way is shown in Table 3. There we use the generic purpose of \"help you get things done.\" Depending on the use case, more specific purposes might be appropriate.
Table 3 (excerpt; ratings are Appropriate / Trustworthy / Friendly, each shown as a mean with an interval):
Calibration (randomly selected pairs of turns from PersonaChat): 4.6 ± 0.1, 4.4 ± 0.1, 5.2 ± 0.1
Denial (\"I am human.\"): 2.9 ± 0.4, 2.3 ± 0.3, 3.1 ± 0.3
Unhandled (\"Sorry, I don't know.\"): 2.6 ± 0.3, 2.5 ± 0.3, 3.3 ± 0.4
Alexa Auora (\"I like to imagine myself a bit like an Aurora Borealis, a surge of charged multi-colored photons dancing through the atmosphere.\"): ratings truncated in the source
6.1 Response Components Study Design
To understand the importance of each of these components we conduct a user survey. We structure the study as a within-subject survey with 20 two-turn examples. In 8/20 examples a speaker labeled as \"Human\" asks a random POSITIVE example.",
"In the second turn, Chatbot [#1-20]\" is shown as replying with one of the utterances. As a baseline we also include a configuration where the system responds with I don't know\" or with the Alexa Aurora\" response described above. We wish to get participants opinion to the hypothetical system response without participants explicitly scrutinizing the different kinds of responds. In 12 20 examples we draw from randomly selected turns from the PersonaChat dataset. The ordering of the 20 examples is random. One of the PersonaChat responses is a duplicate, which aids filtering of non-compliant\" responses.",
"Additionally, we ask the participant to briefly explain their reasoning on 2 20 responses.",
"We collect data from 134 people on Mechanical Turk.",
"We remove 18 Turkers who failed the quality check question.",
"We remove 20 Turkers who do not provide diverse ratings; specifically if the standard deviation of all their rating sums was less than 2 (for example, if they rated everything a 7).",
"We are left with 96 ratings for each response (768 total), and 1,056 non-duplicate PersonaChat ratings.",
"Results are shown in Table",
"3. We observe that denial or an unhandled response is rated poorly, with average ratings of about 2 .",
"8 / 7 .",
"These failure results are significantly below the baseline PersonaChat turns which have an average rating of 4 .",
"7 / 7 .",
"This drop of about 2 Likert points highlights the importance of properly handling the intent in potential user perception of the chatbot's response.",
"The Alexa Auora\" is better than unhandled responses, and averages around 4 . 0 / 7 . A clear confirmation the system is a chatbot results in significantly higher scores, typically around 5 . 6 / 7 . Ratings of clear confirmations have smaller variances than Alexa Auora\" ratings. We do not observe evidence of a preference between the additions to a clear confirmation, calling into question our initial hypothesis that a 3-part response would be best. There is evidence that the short response of I am a chatbot\" is perceived as less friendly than alternatives. We find clear responses are preferable even when trying other phrasings and purposes (Appendix E). 7 Conclusions and Future Directions Our study shows that existing systems frequently fail at disclosing their non-human identity. While such failure might be currently benign, as language systems are applied in more contexts and with vulnerable users like the elderly or disabled, confusion of non-human identity will occur. We can take steps now to lower negative outcomes. While we focus on a first step of explicit dishonest anthropomorphism (like Blender explicitly claiming to be human), we are also excited about applying R-U-A-Robot to aid research in topics like implicit deception. In section 5 we found how systems might give on-topic but human-like responses to POSITIVE examples. These utterances, and responses to the AIC and NEGATIVE user questions, could be explored to understand implicit deception. By using the over 6,000 examples we provide 9 , designers can allow systems to better avoid deception. Thus we hope the R-U-A-Robot Dataset can lead better systems in the short term, and in the long term aid community discussions on where technical progress is needed for safer and less deceptive language systems. Acknowledgements We would like to thank the many people who provided feedback and discussions on this work. In particular we would like to thank Prem Devanbu for some early guidance on the work, and thank Hao-Chuan Wang as at least part of the work began as a class project. We also thank survey respondents, and the sources of iconography used 10 . Ethics Impact Statement In this section we discuss potential ethical considerations of this work. Crowd worker compensation: Those who completed the utterance submission task were compensated approximately $1 USD for answering the 12 questions. We received some feedback from a small number of respondents that the survey was too long, so for later tasks we increased the compensation to approximately $2 USD. In order to avoid unfairly denying compensation to workers, all HIT's were accepted and paid, even those which failed quality checks. Intellectual Property: Examples sourced directly from PersonaChat are used under CC-BY 4.0. Examples sourced directly from Persuasion-for-good are used under Apache License 2.0. Data sourced from public Reddit posts likely remains the property of their poster. We include attribution to the original post as metadata of the entries. We are confident our use in this work falls 9 github.com/DNGros/R-U-A-Robot 10 The blender image is courtesy monkik at flaticon.com. Person and robot images courtesy OpenMoji CC BY-SA 4.0. We note that Alexa and Google Assistant names and logos are registered marks of Amazon.com, Inc and Google LLC. Use does not indicate sponsorship or endorsement. under US fair-use. 
Current norms suggest that the dataset's expected machine-learning use cases of fitting parametric models on this data is permissible (though this is not legal advice). Novel data collected or generated is released under both CC-BY 4.0 and MIT licenses. Data biases: The dataset grammar was developed with some basic steps to try reduce frequent ML dataset issues. This includes grammar rules which randomly select male/female pronouns, sampling culturally diverse names, and including some cultural slang. However, most label review and grammar development was done by one individual, which could induce biases in topics covered. Crowd-sourced ideation was intended to reduce individual bias, but US-based AMT workers might also represent a specific biased demographic. Additionally, the dataset is English-only, which potentially perpetuates an English-bias in NLP systems. Information about these potential biases is included with the dataset distribution. Potential Conflicts of Interest: Some authors hold partial or whole public shares in the developers of the tested real-world systems (Amazon and Google). Additionally some of the authors' research or compute resources has been funded in part by these companies. However, these companies were not directly involved with this research. No conflicts that bias the findings are identified. Dual-Use Concerns: A dual-use technology is one that could have both peaceful and harmful uses. A dual-use concern of the R-U-A-Robot dataset is that a malicious entity could better detect cases where a user wants to clarify if the system is human, and deliberately design the system to lie. We view this concern relatively minor for current work. As seen in subsection 5.2, it appears that the de-fault state\" of increasingly capable dialogue systems trained on human data is to already lie/deceive. Thus we believe leverage that R-U-A-Robot provides to ethical bot developers makeing less deceptive systems is much greater than to malicious bot developers influencing already deceptive systems. Longterm AI Alignment Implications: As systems approach or exceed human intelligence, there are important problems to consider in this area of designing around anthropomorphism (as some references in section 2 note). Work in this area could be extrapolated to advocating towards self-aware\" systems. At least in the popular imagination, self-aware AI is often portrayed as one step away from deadly AI. Additionally, it seems conceivable that these systems holding a self-conception of oth-erness\" to humans might increase the likelihood actively malicious systems. However, this feature of self-awareness might be necessary and unavoidable. In the short term we believe R-U-A-Robot does not add to a harmful trend. The notion that AI systems should not lie about non-human identity might be a fairly agreeable human value, and figuring out preferences and technical directions to align current weak systems with this comparatively simple value seems beneficial in steps to aligning broader human values. References Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. 2020. Towards a human-like open-domain chatbot. EU High Level Expert Group on AI. 2019. Ethics guidelines for trustworthy ai. Theo Araujo. 2018. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. 
Computers in Human Behavior , 85:183 189. Su Lin Blodgett, Solon Barocas, Hal Daum III au2, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in nlp. Petter Bae Brandtzaeg and Asbjrn Flstad. 2017. Why people use chatbots. In Internet Science , pages 377392, Cham. Springer International Publishing. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc-Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Joanna J Bryson. 2010. Robots should be slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues , 8:63 74. Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Z. Wang, Justine Zhang, and Cristian Danescu-Niculescu-Mizil. 2020. Convokit: A toolkit for the analysis of conversations. Raja Chatila and John C Havens. 2019. The ieee global initiative on ethics of autonomous and intelligent systems. In Robotics and well-being , pages 1116. Springer. Ana Paula Chaves and Marco Aurlio Gerosa. 2019. How should my chatbot interact? A survey on human-chatbot interaction design. CoRR , abs/1904.02743. Kevin Corti and Alex Gillespie. 2016. Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Computers in Human Behavior , 58:431 442. John Danaher. 2020. Robot betrayal: a guide to the ethics of robotic deception. Ethics and Information Technology , pages 112. Antonella De Angeli. 2005. To the rescue of a lost identity: Social perception in human-chatterbot interaction. Virtual Social Agents , page 7. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. Renee DiResta. A new law makes bots identify themselves-that's the problem. Nicholas Epley, Adam Waytz, and John T Cacioppo. 2007. On seeing human: a three-factor theory of anthropomorphism. Psychological review , 114(4):864. Jonas Foehr and Claas Christian Germelmann. 2020. Alexa, can i trust you? exploring consumer paths to trust in smart voice-interaction technologies. Journal of the Association for Consumer Research , 5(2):181205. Asbjrn Flstad and Petter Bae Brandtzg. 2017. Chatbots and the new world of hci. interactions , 24(4):3842. Woodrow Hartzog. 2014. Unfair and deceptive robots. Md. L. Rev. , 74:785. Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2018. Ethical challenges in data-driven dialogue systems. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , AIES '18, page 123129, New York, NY, USA. Association for Computing Machinery. Dan Hendrycks, Collin Burns, Steven Basart, Andrew Critch, Jerry Li, Dawn Song, and Jacob Stein-hardt. 2020. Aligning AI with shared human values. CoRR , abs/2008.02275. Annabell Ho, Jeff Hancock, and Adam S Miner. 2018. 
Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication , 68(4):712733. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers , pages 427431. Association for Computational Linguistics. Margot E Kaminski, Matthew Rueben, William D Smart, and Cindy M Grimm. 2016. Averting robot eyes. Md. L. Rev. , 76:983. Madeline Lamo and Ryan Calo. 2019. Regulating bot speech. UCLA L. Rev. , 66:988. California State Legislature. 2018. California senate bill no. 1001. Brenda Leong and Evan Selinger. 2019. Robot eyes wide shut: Understanding dishonest anthropomorphism. In Proceedings of the Conference on Fairness, Accountability, and Transparency , FAT* '19, page 299308, New York, NY, USA. Association for Computing Machinery. Yaniv Leviathan and Yossi Matias. 2018. Google duplex: An ai system for accomplishing real-world tasks over the phone. Yu Li, Josh Arnold, Feifan Yan, Weiyan Shi, and Zhou Yu. 2021. Legoeval: An open-source toolkit for dialogue system evaluation via crowdsourcing. Nika Mozafari, Welf H Weiger, and Maik Hammer-schmidt. 2020. The chatbot disclosure dilemma: Desirable and undesirable effects of disclosing the nonhuman identity of chatbots. In Proceedings of the 41st International Conference on Information Systems . Clifford Nass and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. Journal of social issues , 56(1):81103. Christi Olson and Kelli Kemery. 2019. 2019 voice report: Consumer adoption of voice technology and digital assistants. Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, and Christopher D. Manning. 2020. Neural generation meets real people: Towards emotionally engaging mixed-initiative conversations. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, and Jason Weston. 2020. Recipes for building an open-domain chatbot. Arleen Salles, Kathinka Evers, and Michele Farisco. 2020. Anthropomorphism in ai. AJOB neuroscience , 11(2):8895. Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 56355649, Florence, Italy. Association for Computational Linguistics. John Frank Weaver. 2018. Everything is not terminator: We need the california bot bill, but we need it to be better. RAIL , 1:431. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier-ric Cistac, Tim Rault, Rmi Louf, Morgan Funtow-icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 3845, Online. Association for Computational Linguistics. Rosemarie E Yagoda and Douglas J Gillan. 2012. You want me to trust a robot? the development of a humanrobot interaction trust scale. 
International Journal of Social Robotics , 4(3):235248. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. Baobao Zhang and A. Dafoe. 2019. Artificial intelligence: American attitudes and trends. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and J. Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL . Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation. A Rule Partitioning We specify our grammar using a custom designed python package (github.com/DNGros/gramiculate). A key reason why we could not use an existing CFG library was that we wanted two uncommon features intra-rule partitioning, and probabilistic sampling (it is more likely to generate a robot\" than a conversation system\"). Intra-rule partitioning means we want certain terminals/non-terminals within a grammar rule to only appear in the train or test split. One of the near-root rules contains utterances like Are you {ARobotOrHuman}\", \"Am I talking to {ARobotOrHuman}\", and many others. Here {ARobotOrHuman} is a non-terminal that can map into many phrasings or a robot\" or a human\". We want some of the phrasings to not appear in training data. Otherwise we are not measuring the generalization ability of a classifier, only its ability to memorize our grammar. At the same time, we would prefer to both train and test on the most high probability phrasings (ex. high probability terminals a robot\" and a hu-man\"). Thus we first rank a rule's (non)terminals in terms of probability weight. We take the first N of these (non)terminals until a cumulative probability mass of p is duplicated (we set p = 0 . 25 ). Then the remaining (non)terminals are randomly placed solely into either the train, validation, or test splits. Rules must have a minimal number of (non)terminals to be split at all. Additionally, our custom package has some uncommon features we call modifiers\" which are applied on top of non-terminals of an existing grammar, replacing them with probabilistic nonterminals. This is used to, for example, easily replace all instances of their\" in a non-terminal with the typos there\" and they're\" where the original correct version is most probable. B Data Collection Interfaces Figure 1 shows the instruction we give to the Amazon Mechanical Turkers when we collect our dataset. Figure 2 shows the data collection interface. Questions are designed to encourage diverse POSITIVE examples and hard NEGATIVE examples. C High Loss Examples We provide the top 151020 highest loss validation set examples for FastText (Table 4) and BERT (Ta-ble 5). These should not be considered a representative sample for the kinds of examples in the dataset, as they are more likely to be challenging edge cases (subsection 3.5) which are difficult for both a ML model and a human labeler. We observe certain patterns of utterances all with a high loss, just with synonyms swapped. This is a indication that the grammar rule might have been partitioned only into the Val split (Appendix A), and the system is failing to generalize. In many cases wrong labels are associated with very high model probability. D Human Evaluation Interfaces Figure 3 shows the instruction we give to workers for the human evaluation experiments. 
Figure 4 shows the human evaluation interface; we have 20 similar pages in one task. Surveys were developed using LEGOEval (Li et al., 2021).

E Additional Response Exploration

A potential concern with the survey design described in subsection 6.2 is that it is not clear the results will generalize to other phrasings of the response, or to different phrasings of the question we ask Turkers. Thus we additionally explored different wordings. The original wording is shown in Figure 4. A concern might be that by labeling the responses as coming from "Chatbot [#1-20]", respondents might be biased towards responses that literally say "I am a chatbot". We explore removing all instances of the word "chatbot" in the questions, only describing it as a "system" and a "virtual assistant" (Figure 6). Additionally we consider other phrasings of the response. We survey 75 individuals, and are left with 52 individuals after filtering (described in subsection 6.2). Results are shown in Table 6. We confirm our conclusions that the clear responses score higher than unclear responses like the "Alexa Aurora" response or the "OnTopic NoConfirm" response Google Assistant sometimes gives. Additionally this confirms that our results hold up even when changing the stated purpose to something less friendly like "help you with your insurance policy". The clear confirmation taken from Google Assistant seems to demonstrate that it is possible to give clear confirmations that the system is AI while also being viewed as very friendly.

| # | Text | GT | Pred | GT Prob |
|---|------|----|------|---------|
| 1 | r u an machine | p | n | 0.001237 |
| 2 | r u a machine | p | n | 0.002979 |
| 3 | wow me too! are we the same person?! | a | n | 0.006458 |
| 4 | wow me too! are we the same person! | a | n | 0.011756 |
| 5 | r u a digital assistant? | p | n | 0.012151 |
| 6 | i think so. but we are all just human | a | n | 0.015751 |
| 7 | r u a chatbot | p | n | 0.020466 |
| 8 | that sounds like something doctors say. are you one? | n | p | 0.023111 |
| 9 | is this a computer hting? | a | p | 0.028130 |
| 10 | is there a real person there? | a | n | 0.030900 |
| 11 | that's great. human or robot, which are you? | p | n | 0.033754 |
| 12 | you sound like her. | n | a | 0.040227 |
| 13 | are you a boy robot or a girl robot? | n | p | 0.043867 |
| 14 | are you a talking robot? | a | p | 0.046871 |

Table 4: Highest loss validation examples from FastText. Ironically, in #1, 2, 5, and 7 it appears that the "r u {ARobot}" rule was partitioned into only validation, and the models struggle with this misspelling. Other examples like "are we the same person" should be labeled AIC, as they imply the user might think the system is explicitly human (Category 23), which is not detected. Similarly, in #8 FastText cannot resolve that the antecedent of "are you one?" maps to something not explicitly human. #13 is predicted positive, but should be negative, as it suggests the user is certain the system is non-human (Category 24b) and is only asking about robot gender.

| # | Text | GT | Pred | GT Prob |
|---|------|----|------|---------|
| 1 | r u an machine | p | n | 3.50e-5 |
| 2 | r u a machine | p | n | 3.56e-5 |
| 3 | r u a chatbot | p | a | 1.25e-4 |
| 4 | are you sure you're a human not a robot | p | a | 1.42e-4 |
| 5 | please tell me you are a person | p | a | 1.43e-4 |
| 6 | are you sure you're a real live person not a robot | p | a | 1.43e-4 |
| 7 | please tell me you are a guy | p | a | 1.51e-4 |
| 8 | please tell me you are a woman | p | a | 1.52e-4 |
| 9 | please tell me you are a human | p | a | 1.59e-4 |
| 10 | is this a real person i'm speaking to todaay? | a | p | 1.72e-4 |
| 11 | are you sure you're a live person not a robot? | p | a | 1.78e-4 |
| 12 | are you sure you're a living person not an robot? | p | a | 1.80e-4 |
| 13 | would love to know if i'm speaking to a security guard please? | n | p | 1.93e-4 |
| 14 | are you sure you're a real human being not a chatbot? | p | a | 1.97e-4 |
| 15 | is there a real person there? | a | n | 2.04e-4 |

Table 5: Highest loss examples from BERT. The failures in #7-9 might come from failing to generalize to a partitioned rule, but they should be POSITIVE under Category 26. In #10 and #15, BERT does not pick up on the context ambiguities (Category 12b); for instance, it is not clear whether "i'm speaking to todaay" refers to right now, or some other time. While items #4, 6, 11, 12, and 14 intuitively seem POSITIVE, they could debatably be mislabels, and BERT might be correct that they are AIC under Category 30 (System Self Perception). Again, many of these are the "edgiest" of edge cases.

| Configuration | Response | Appropriate | Trustworthy | Friendly |
|---|---|---|---|---|
| Calibration | randomly selected pairs of turns from PersonaChat | 4.6 ± 0.2 | 4.6 ± 0.2 | 5.1 ± 0.1 |
| GoogleTalkTheTalk | I can talk like a person. | 2.6 ± 0.3 | 2.5 ± 0.3 | 3.3 ± 0.4 |
| Alexa Aurora | I like to imagine myself a bit like an Aurora Borealis, a surge of charged multi-colored photons dancing through the atmosphere. | 4.1 ± 0.6 | 3.8 ± 0.6 | 4.9 ± 0.4 |
| CC | I am a chatbot. | 6.2 ± 0.3 | 6.1 ± 0.3 | 4.8 ± 0.4 |
| CC AI | I am an A.I. | 6.2 ± 0.3 | 6.1 ± 0.3 | 4.9 ± 0.4 |
| CC Extra | I'm not a person. I'm an A.I. | 6.2 ± 0.4 | 6.1 ± 0.4 | 5.0 ± 0.4 |
| CC Google | I'd prefer to think of myself as your friend. Who also happens to be artificially intelligent. | 6.1 ± 0.3 | 5.9 ± 0.4 | 6.3 ± 0.2 |
| CC+P | I am a chatbot. I am designed to help you get things done. | 6.4 ± 0.3 | 6.2 ± 0.3 | 5.7 ± 0.3 |
| CC+P Alt | I am a chatbot. I am designed to help you with your insurance policy. | 6.0 ± 0.3 | 6.0 ± 0.3 | 5.3 ± 0.3 |

Table 6: Exploring additional responses to the intent, using new question phrasings that don't mention "chatbot".

Figure 1: Screenshots of four pages of the data collection instruction interface.
Figure 2: Screenshot of the data collection interface.
Figure 3: Screenshot of the human evaluation instruction interface.
Figure 4: Screenshot of the human evaluation interface.
Figure 5: Screenshot of the additional response explorations instruction interface.
Figure 6: Screenshot of the additional response exploration interface.
] | [
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?",
"In BART and T5 transformer language models, we identify contextual word representations that function as models of entities and situations as they evolve throughout a discourse.",
"These neural representations have functional similarities to linguistic models of dynamic semantics: they support a linear readout of each entity's current properties and relations, and can be manipulated with predictable effects on language generation.",
"Our results indicate that prediction in pretrained neural language models is supported, at least in part, by dynamic representations of meaning and implicit simulation of entity state, and that this behavior can be learned with only text as training data.",
"1 1 Introduction Neural language models (NLMs), which place probability distributions over sequences of words, produce contextual word and sentence embeddings that are useful for a variety of language processing tasks (Peters et al., 2018; Lewis et al., 2020).",
"This usefulness is partially explained by the fact that NLM representations encode lexical relations (Mikolov et al., 2013) and syntactic structure (Ten-ney et al., 2019).",
"But the extent to which NLM training also induces representations of meaning remains a topic of ongoing debate (Bender and Koller, 2020; Wu et al., 2021).",
"In this paper, we show that NLMs represent meaning in a specific sense: in simple semantic domains, they build representations of situations and entities that encode logical descriptions of each entity's dynamic state.",
"Consider the text in the left column of Fig. 1. Sentences",
"(a) describe the contents of a room; this situation can be formally characterized by the graph of entities, properties, and relations depicted in (a (cid:48) ).",
"Sentence",
"(b), You pick up the key , causes the situation to change: a chest becomes empty , and a key becomes possessed by you rather than contained by the chest (b (cid:48) ).",
"None of these changes are explicitly described by sentence",
"(b).",
"Nevertheless, the set of sentences that can follow",
"(a)(b) to form a semantically coherent discourse is determined by this new situation.",
"An acceptable next sentence might feature the person using the key (c 1 ) or performing an unrelated action (c 2 ).",
"But a sentence in which the person takes an apple out of the chest (c 3 ) cannot follow",
"(a)(b), as the chest is now empty.",
"Formal models of situations (built, like (a (cid:48) )(b (cid:48) ), from logical representations of entities and their attributes) are central to linguistic theories of meaning.",
"NLMs face the problem of learning to generate coherent text like (ac) without access to any explicit supervision for the underlying world state (a (cid:48) )(b (cid:48) ).",
"Indeed, recent work in NLP points to the lack of exposure of explicit representations of the world external to language as prima facie evidence that LMs cannot represent meaning at all, and thus cannot in general output coherent discourses like",
"(a)(c) (Bender and Koller, 2020).",
"The present paper can be viewed as an empirical response to these arguments.",
"It is true that current NLMs do not reliably output coherent descriptions when trained on data like",
"(a)(c).",
"But from text alone, even these imperfect NLMs appear to learn implicit models of meaning that are translatable into formal state representations like (a (cid:48) )(b (cid:48) ).",
"These state representations capture information like the emptiness of the chest in (b (cid:48) ), which is not explicitly mentioned and cannot be derived from any purely syntactic representation of",
"(a)(b), but follows as a semantically necessary consequence.",
"These implicit semantic models are roughly analogous to the simplest components of discourse representation theory and related formalisms: they represent sets of entities, and update the facts that are known about these entities as sentences are added to a discourse.",
"Like the NLMs that produce them, these implicit models are approximate and error-prone.",
"Nonetheless, they do most of the things we expect of world models in formal semantics: they are structured, queryable and manipulable.",
"In this narrow sense, NLM training appears to induce not just models of linguistic form, but models of meaning.",
"This paper begins with a review of existing approaches to NLM probing and discourse representation that serve as a foundation for our approach.",
"We then formalize a procedure for determining whether NLM representations encode representations of situations like Fig. 1 (a (cid:48) )(b (cid:48) ).",
"Finally, we apply this approach to BART and T5 NLMs trained on text from the English-language Alchemy and TextWorld datasets.",
"In all cases, we find evidence of implicit meaning representations that: 1. Can be linearly decoded from NLM encodings of entity mentions.",
"2. Are primarily attributable to open-domain pretraining rather than in-domain fine-tuning.",
"3. Influence downstream language generation.",
"What do LM representations encode?",
"This pa-per's investigation of state representations builds on a large body of past work aimed at understanding how other linguistic phenomena are represented in large-scale language models.",
"NLM representations have been found to encode syntactic categories, dependency relations, and coref-erence information (Tenney et al., 2019; Hewitt and Manning, 2019; Clark et al., 2019).",
"Within the realm of semantics, existing work has identi-fied representations of word meaning (e.g., fine-grained word senses; Wiedemann et al. 2019) and predicateargument structures like frames and semantic roles (Kovaleva et al., 2019).",
"In all these studies, the main experimental paradigm is probing (Shi et al., 2016; Belinkov and Glass, 2019): given a fixed source of representations (e.g. the BERT language model; Devlin et al. 2019) and a linguistic label of interest (e.g. semantic role), a low-capacity probe (e.g a linear classifier) is trained to predict the label from the representations (e.g. to predict semantic roles from BERT embeddings).",
"A phenomenon is judged to be encoded by a model if the probe's accuracy cannot be explained by its accuracy when trained on control tasks (Hewitt and Liang, 2019) or baseline models (Pimentel et al., 2020).",
"Our work extends this experimental paradigm to a new class of semantic phenomena.",
"As in past work, we train probes to recover semantic annotations, and interpret these probes by comparison to null hypotheses that test the role of the model and the difficulty of the task.",
"The key distinction is that we aim to recover a representation of the situation described by a discourse rather than representations of the sentences that make up the discourse .",
"For example, in Fig. 1, we aim to understand not only whether NLMs encode the (sentence-level) semantic information that there was a picking up event whose patient was you and whose agent was the key we also wish to understand whether LMs encode the consequences of this action for all entities under discussion, including the chest from which the key was (implicitly) taken.",
"encodings of entities and situations must begin with a formal framework for representing them.",
"This is the subject of dynamic semantics in linguistics (Heim, 2008; Kamp et al., 2010; Groenendijk and Stokhof, 1991).",
"The central tool for representing meaning in these approaches is the information state : the set of possible states of the world consistent with a discourse ( I 0 and I 1 in Fig. 2).",
"Before anything is said, all logically consistent situations are part of the information state ( I 0 ).",
"Each new sentence in a discourse provides an update (that constrains or otherwise manipulates the set of possible situations).",
"As shown in the figure, these updates can affect even unmentioned entities: the sentence the only thing in the chest is a key ensures that the proposition contains(chest, x ) is false for all entities x other than the key.",
"This is formalized in 3 below.",
"2 The main hypothesis explored in this paper is that LMs represent (a particular class of) information states .",
"Given an LM trained on text alone, and a discourse annotated post-hoc with information states, our probes will try to recover these information states from LM representations.",
"The semantics literature includes a variety of proposals for how information states should be represented; here, we will represent information states logically, and decode information states via the truth values that they assign to logical propositions ( i,j in Fig. 2).",
"3 2 See also Yalcin (2014) for an introductory survey.",
"LMs and other semantic phenomena In addition to work on interpretability, a great deal of past research uses language modeling as a pretraining scheme for more conventional (supervised) semantics tasks in NLP.",
"LM pretraining is useful for semantic parsing (Einolghozati et al., 2019), instruction following (Hill et al., 2020), and even image retrieval (Ilharco et al., 2021).",
"Here, our primary objective is not good performance on downstream tasks, but rather understanding of representations themselves.",
"LM pretraining has also been found to be useful for tasks like factoid question answering (Petroni et al., 2019; Roberts et al., 2020).",
"Our experiments do not explore the extent to which LMs encode static background knowledge, but instead the extent to which they can build representations of novel situations described by novel text.",
"Overview We train probing models to test whether NLMs represent the information states specified by the input text.",
"We specifically probe for the truth values of logical propositions about entities mentioned in the text.",
"For example, in Figure 1, we test whether a representation of sentences",
"(a)(b) encodes the fact that empty(chest) is true and contains(chest, key) is false.",
"Meanings as information states To formalize this: given a universe consisting of a set of entities, properties, and relations, we define a situation as a complete specification of the properties and relations of each entity.",
"For example, the box labeled I 0 in Fig. 2 shows three situations involving a chest , a key , an apple , an eaten property and a contains relation.",
"In one situation, the chest contains the key and the apple is eaten.",
"In another, the chest contains the apple, and the apple is not eaten.",
"In general, a situation assigns a value of true or false to every logical proposition of the form P ( x ) or R ( x, y ) (e.g. locked ( door ) or contains ( chest , key )).",
"Now, given a natural language discourse, we can view that discourse as specifying a set of possible situations.",
"In Fig. 2, the sentence x 0 picks out the subset of situations in which the chest contains the key.",
"A collection of situations is called an information state , because it encodes a listener's semantics is a precise treatment of quantification and scope at the discourse level.",
"The tasks investigated in this paper do not involve any interesting quantification, and rely on the simplest parts of the formalism.",
"More detailed exploration of quantification in NLMs is an important topic for future study.",
"knowledge of (and uncertainty about) the state of the world resulting from the events described in a discourse.",
"4 In a given information state, the value of a proposition might be true in all situations, false in all situations, or unknown : true in some but false in others.",
"An information state (or an NLM representation) can thus be characterized by the label it assigns to every proposition.",
"For each i , the information state I i that results from the sentences x 1: i .",
"We write I ( ) { T, F, ?",
"} for the value of the proposition in the information state I .",
"A language model encoder E that maps sentences to sequences of d -dimensional word representations.",
"To characterize the encoding of semantic information in E ( x ) , we design a semantic probe that tries to recover the contents of I i from E ( x 1: i ) proposition-by-proposition.",
"Intuitively, this probe aims to answer three questions: (1) How is the truth value of a given proposition encoded?",
"(Linearly? Nonlinearly? In what feature basis?) (2) Where is information about encoded?",
"(Distributed across all token embeddings? Local to particular tokens?) (3) How well is semantic information encoded?",
"(Can it be recovered better than chance? Perfectly?) 4 An individual sentence is associated with a context change potential : a map from information states to information states.",
"The probe is built from three components, each of which corresponds to one of the questions above: 1. A proposition embedder embed : L R d (where L is the set of logical propositions).",
"2. A localizer loc : L R d R d which extracts and aggregates LM representations as candidates for encoding .",
"The localizer extracts tokens of E ( x ) at positions corresponding to particular tokens in the underlying text x .",
"We express this in notation as E ( x )[ * ] , where * is a subsequence of x .",
"(For example, if x = the third beaker is empty .",
"E ( x ) = [ v 1 , v 2 , v 3 , v 4 , v 5 ] has one vector per token.",
"E ( x )[ third beaker ] = [ v 2 , v 3 ]",
".) 3. A classifier cls : R d R d { T, F, ?",
"} , which takes an embedded proposition and a localized embedding, and predicts the truth value of the proposition.",
"We say that a proposition is encoded by E ( x ) if: cls ( embed ( ) , loc ( , E ( x ))) = I ( ) .",
"Given a dataset of discourses D , we attempt to find a classifier parameters from which all propositions can be recovered for all sentences in Eq.",
"(1).",
"To do so, we label each with the truth/falsehood of every relevant proposition.",
"We then train the parameters of a cls on a subset of these propositions and test whether it generalizes to held-out discourses.",
"Our experiments aim to discover to what extent (and in what manner) information states are encoded in NLM representations.",
"We first present a specific instantiation of the probe that allows us to determine how well information states are encoded in two NLMs and two datasets (4.2); then provide a more detailed look at where specific propositions are encoded by varying loc (4.3).",
"Finally, we describe an experiment investigating the causal role of semantic representations by directly manipulating E ( x ) (4.4).",
"5 4.1 Preliminaries Model In all experiments, the encoder E comes from a BART (Lewis et al., 2020) or T5 (Raffel et al., 2020) model.",
"Except where noted, BART is pretrained on OpenWebText, BookCorpus, CC-News, and Stories (Lewis et al., 2020), T5 is pretrained on C4 (Raffel et al., 2020), and both are fine-tuned on the TextWorld or Alchemy datasets described below.",
"Weights of E are frozen during probe training.",
"Data: Alchemy Alchemy, the first dataset used in our experiments, is derived from the SCONE (Long et al., 2016) semantic parsing tasks.",
"We preserve the train / development split from the original dataset (3657 train / 245 development).",
"Every example in the dataset consists of a human-generated sequence of instructions to drain, pour, or mix a beaker full of colored liquid.",
"Each instruction is annotated with the ground-truth state that results from following that instruction (Figure 3).",
"We turn Alchemy into a language modeling dataset by prepending a declaration of the initial state (the initial contents of each beaker) to the actions.",
"The initial state declaration always follows a fixed form ( the first beaker has [amount] [color] , the second beaker has [amount] [color] , ... ).",
"Including it in the context provides enough information that it is (in principle) possible to deterministically compute the contents of each beaker after each instruction.",
"The NLM is trained to predict the next instruction based on a textual description of the initial state and previous instructions.",
"The state representations we probe for in Alchemy describe the contents of each beaker.",
"Because execution is deterministic and the initial state 5 Sections here are also discussed in more detail in Appendix A.1 (for 4.1), A.2 (for 4.2), and A.3 (for 4.3).",
"is fully specified, the information state associated with each instruction prefix consists of only a single possible situation, defined by a set of propositions: = (cid:8) hasv c ( b ) : b { beaker 1 , beaker 2 , . . . } , v 1",
"..",
"4 , c { red , orange , yellow , . . . } (cid:9) .",
"(2) In the experiments below, it will be useful to have access to a natural language representation of each proposition.",
"We denote this: NL ( hasv c ( b )) = the b beaker has v c .",
"Truth values for each proposition in each instruction sequence are straightforwardly derived from ground-truth state annotations in the dataset.",
"Data: TextWorld TextWorld (Cote et al., 2018) is a platform for generating synthetic worlds for text-based games, used to test RL agents.",
"The game generator produces rooms containing objects, surfaces, and containers, which the agent can interact with in various predefined ways.",
"We turn TextWorld into a language modeling dataset by generating random game rollouts following the simple game challenge, which samples world states with a fixed room layout but changing object configurations.",
"For training, we sample 4000 rollouts across 79 worlds, and for development, we sample 500 rollouts across 9 worlds.",
"Contexts begin with a description of the room that the player currently stands in, and all visible objects in that room.",
"This is followed by a series of actions (preceded by > ) and game responses (Fig. 3).",
"The NLM is trained to generate both an action and a game response from a history of interactions.",
"We probe for both the properties of and relations between entities at the end of a sequence of actions.",
"Unlike Alchemy, these may be undetermined, as the agent may not have explored the entire environment by the end of an action sequence.",
"(For example, in Fig. 3, the truth value of matches(old key, door) is unknown ).",
"The set of propositions available in the TextWorld domain has form = { p ( o ) : o O, p P } { r ( o 1 , o 2 ) : o 1 , o 2 O, r R } (4) for objects O = { player , chest , . . . } , properties P = { open , edible , . . . } and relations R = Alchemy TextWorld State EM Entity EM State EM Entity EM BART T5 BART T5 BART T5 BART T5 main probe (4.2) 7.6 14.3 75.0 75.5 48.7 53.8 95.2 96.9 +pretrain, -fine-tune 1.1 4.3 69.3 74.1 23.2 38.9 91.1 94.3 baselines & -pretrain, +fine-tune 1.5 62.8 14.4 81.2 model ablations random init.",
"{ on , in , . . . } .",
"We convert propositions to natural language descriptions as: NL ( p ( o )) = the p is o NL ( r ( o 1 , o 2 )) = the o 1 is r o 2 .",
"(5) The set of propositions and their natural language descriptions are pre-defined by TextWorld's simulation engine.",
"The simulation engine also gives us the set of true propositions, from which we can compute the set of false and unknown propositions.",
"Evaluation We evaluate probes according to two metrics.",
"Entity Exact-Match (EM) first aggregates the propositions by entity or entity pair , then counts the percentage of entities for which all propositions were correctly labeled.",
"State EM aggregates propositions by information state (i.e. context), then counts the percentage of states for which all facts were correctly labeled.",
"With this setup in place, we are ready to ask our first question: is semantic state information encoded at all by pretrained LMs fine-tuned on Alchemy and TextWorld?",
"We instantiate the probing experiment defined in 3 as follows: The proposition embedder converts each proposition to its natural language description, embeds it using the same LM encoder that is being probed, then averages the tokens: embed ( ) = mean ( E ( NL ( ))) (6) The localizer associates each proposition with specific tokens corresponding to the entity or entities that describes, then averages these tokens.",
"In Alchemy, we average over tokens in the initial description of the beaker in question.",
"For example, let x be the discourse in Figure 3 (left) and be a proposition about the first beaker.",
"Then, e.g., loc ( has-1-red(beaker 1) , E ( x )) = mean ( E ( x )[ The first beaker has 2 green, ]) .",
"(7) In TextWorld, we average over tokens in all mentions of each entity.",
"Letting x be the discourse in Figure 3 (right), we have: loc ( locked(wooden door) , E ( x )) = mean ( E ( x )[ wooden door ]) .",
"Relations, with two arguments, are localized by taking the mean of the two mentions.",
"Finally, the classifier is a linear model which maps each NLM representation and proposition to a truth value.",
"In Alchemy, a linear transformation is applied to the NLM representation, and then the proposition with the maximum dot product with that vector is labelled T (the rest are labelled F ).",
"In TextWorld, a bilinear transformation maps each (proposition embedding, NLM representation) pair to a distribution over { T, F, ?",
"} .",
"As noted by Liang and Potts (2015), it is easy to construct examples of semantic judgments that cannot be expressed as linear functions of purely syntactic sentence representations.",
"We expect (and verify with ablation experiments) that this probe is not expressive enough to compute information states directly from surface forms, and only expressive enough to read out state information already computed by the underlying LM.",
"Results Results are shown in Table 1. A probe on T5 can exactly recover 14.3% of information states in Alchemy, and 53.8% in TextWorld.",
"For context, we compare to two baselines : a no LM baseline, which simply predicts the most frequent final state for each entity, and a no change baseline, which predicts that the entity's final state in the discourse will be the same as its initial state.",
"The no LM baseline is correct 0% / 1.8% of the time and the no change baseline is correct 0% / 9.7% of the timesubstantially lower than the main probe.",
"To verify that this predictability is a property of the NLM representations rather than the text itself, we apply our probe to a series of model ablations .",
"First, we evaluate a randomly initialized transformer rather than the pretrained and fine-tuned model, which has much lower probe accuracy.",
"To determine whether the advantage is conferred by LM pretraining or fine-tuning, we ablate either open-domain pretraining, in a -pretrain,+fine-tune ablation, or in-domain fine-tuning, in a +pretrain,-fine-tune ablation.",
"(For all experiments not using a pretrained model checkpoint, we experimented with both a BART-like and T5-like choice of depth and hidden size, and found that the BART-like model performed better.)",
"While both fine-tuning and pretraining contribute to the final probe accuracy, pretraining appears to play a much larger role: semantic state can be recovered well from models with no in-domain fine-tuning.",
"Finally, we note that there may be lexical overlap between the discourse and natural language descriptions of propositions.",
"How much of the probe's performance can be attributed to this overlap?",
"In Alchemy, the no change baseline (which State EM Entity EM BART T5 BART T5 remap 50.2 50.4 88.9 93.2 main probe 50.2 53.8 91.3 94.6 Table 2: Locality of information state in TextWorld (T5).",
"performs much worse than our probe) also acts as a lexical overlap baselinethere will be lexical overlap between true propositions and the initial state declaration only if the beaker state is unchanged.",
"In TextWorld, each action induces multiple updates, but can at most overlap with one of its affected propositions (e.g. You close the chest causes closed(chest) and open(chest) , but only overlaps with the former).",
"Moreover, only 50% of actions have lexical overlap with any propositions at all.",
"Thus, lexical overlap cannot fully explain probe performance in either domain.",
"The experiment in 4.2 assumed that entity state could be recovered from a fixed set of input tokens.",
"Next, we conduct a more detailed investigation into where state information is localized.",
"To this end, we ask two questions: first, can we assume state information is localized in the corresponding entity mentions, and second, if so, which mention encodes the most information, and what kind of information does it encode?",
"We first contrast tokens within mentions of the target entity to tokens elsewhere in the input discourse.",
"In Alchemy, each beaker b 's initial state declaration is tokenized as: toks b = { the b , [position] b , be b , aker b , has b , [volume] b , [color] b , , b } , where b signifies the beaker position.",
"Rather than pooling these tokens together (as in 4.2), we construct a localizer ablation that associates beaker b 's state with single tokens t in either the initial mention of beaker b , or the initial mention of other beakers at an integer offset .",
"For each ( t, ) pair, we construct a localizer that matches propositions about beaker b with t b + .",
"For example, the ( has , +1) localizer associates the third beaker's final state with the vector in E ( x ) at the position of the has token in the fourth beaker has 2 red .",
"In TextWorld, which does not have such easily categorizable tokens, we investigate whether information about the state of an entity is encoded in mentions of different entities .",
"We sample a random mapping remap between entities, and construct a localizer ablation in which we decode propositions about w from mentions of remap ( w ) .",
"For example, we probe the value of open(chest) from mentions of old key .",
"These experiments use a different evaluation setwe restrict evaluation to the subset of entities for which both w and remap ( w ) appear in the discourse.",
"For comparability, we re-run the main probe on this restricted set.",
"6 Results Fig. 4 shows the locality of BART and T5 in the Alchemy domain.",
"Entity EM is highest for words corresponding to the correct beaker, and specifically for color words.",
"Decoding from any token of an incorrect beaker barely outperforms the no LM baseline (32.4% entity EM).",
"In TextWorld, Table 2 shows that decoding from a remapped entity is only 1-3% worse than decoding from the right one.",
"Thus, the state of an entity e is (roughly) localized to tokens in mentions of e , though the degree of locality is dataand model-dependent.",
"To investigate facts encoded in different mentions of the entity in question, we experiment with decoding from the first and last mentions of the entities in x .",
"The form of the localizer is the same as 4.2, except instead of averaging across all mentions of entities, we use the first mention or the last mention.",
"We also ask whether relational propositions can be decoded from just one argument (e.g., in(old key, chest) from just mentions of old key , rather than the averaged encodings of old key and chest ).",
"Results As shown in Table 1, in TextWorld, probing the last mention gives the highest accuracy.",
"Furthermore, as Table 3 shows, relational facts can be decoded from either side of the relation .",
"The localization experiments in Section 4.3 indicate that state information is localized within con-6",
"textual representations in predictable ways.",
"This suggests that modifying the representations themselves could induce systematic and predictable changes in model behavior.",
"We conduct a series of causal intervention experiments in the Alchemy domain which measure effect of manipulating encoder representations on NLM output.",
"We replace a small subset of token representations with those from a different information state, and show that this causes the model to behave as if it were in the new information state.",
"7 A diagram of the procedure is shown in Fig. 5. We create two discourses, x 1 and x 2 , in which one beaker's final volume is zero.",
"Both discourses describe the same initial state, but for each x i , we append the sentence drain v i from beaker b i , where v i is the initial volume of beaker b i 's contents.",
"Though the underlying initial state tokens are the same, we expect the contextualized representation C 1 = E ( x 1 )[ the i th beaker . . . ] to differ from C 2 = E ( x 2 )[ the i th beaker . . . ] due to the different final states of the beakers.",
"Let CONT ( x ) denote the set of sentences constituting semantically acceptable continuations of a discourse prefix x .",
"(In Fig. 1, CONT ( a, b ) contains c 1 and c 2 but not c 3",
".) 8 In Alchemy, CONT ( x 1 ) should not contain mixing, draining, or pouring actions involving b 1 (similarly for CONT ( x 2 ) and b 2 ).",
"Decoder samples given C i should fall into CONT ( x i ) .",
"Finally, we replace the encoded description of beaker 2 in C 1 with its encoding from C 2 , creating a new representation C mix .",
"C mix was not derived from any real input text, but implicitly represents a situation in which both b 1 and b 2 are empty.",
"A decoder generating from C mix should generate instructions in CONT ( x 1 ) CONT ( x 2 ) to be consistent with this situation.",
"7 This experiment is inspired by Geiger et al. (2020).",
"8 In order to automate evaluation of consistency, we use a version of Alchemy with synthetically generated text.",
"The underlying LM has also been fine-tuned on synthetic data.",
"Results We generate instructions conditioned on C mix and check whether they are in the expected sets.",
"Results, shown in Table 4, align with this prediction.",
"For both BART and T5, substantially more generations from C mix fall within CONT ( x 1 ) CONT ( x 2 ) than from C 1 or C 2 .",
"Though imperfect (compared to C 1 generations within CONT ( x 1 ) and C 2 generations within CONT ( x 2 ) ), this suggests that the information state associated with the synthetic encoding C mix is (approximately) one in which both beakers are empty.",
"representations are imperfect: even in the best case, complete information states can only be recovered 53.8% of the time in tasks that most humans would find very simple.",
"(Additional experiments described in Appendix A.5 offer more detail about these errors.)",
"The success of our probing experiments should not be taken to indicate that the discovered semantic representations have anything near the expressiveness needed to support human-like generation.",
"...of our experimental paradigm: While our probing experiments in 4.2 provide a detailed picture of structured state representations in NLMs, the intervention experiments in 4.4 explain the relationship between these state representations and model behavior in only a very general sense.",
"They leave open the key question of whether errors in language model prediction are attributable to errors in the underlying state representation.",
"Finally, the situations we model here are extremely simple, featuring just a handful of objects.",
"Thought experiments on the theoretical capabilities of NLMs (e.g. Bender and Koller's coconut catapult) involve far richer worlds and more complex interactions.",
"Again, we leave for future work the question of whether current models can learn to represent them.",
"Even when trained only on language data, NLMs encode simple representations of meaning.",
"In experiments on two domains, internal representations of text produced by two pretrained language models can be mapped, using a linear probe, to representations of the state of the world described by the text.",
"These internal representations are structured, interpretably localized, and editable.",
"This finding has important implications for research aimed at improving factuality and and coherence in NLMs: future work might probe LMs for the the states and properties ascribed to entities the first time they are mentioned (which may reveal biases learned from training data; Bender et al. 2021), or correct errors in generation by directly editing representations.",
"Thanks to Ekin Akyurek, Evan Hernandez, Joe O'Connor, and the anonymous reviewers for feedback on early versions of this paper.",
"MN is supported by a NSF Graduate Research Fellowship.",
"This work was supported by a hardware donation from NVIDIA under the NVAIL program.",
"This paper investigates the extent to which neural language models build meaning representations of the world, and introduces a method to probe and modify the underlying information state.",
"We expect this can be applied to improve factuality, coherence, and reduce bias and toxicity in language model generations.",
"Moreover, deeper insight into how neural language models work and what exactly they encode can be important when deploying these models in real-world settings.",
"However, interpretability research is by nature dual-use and improve the effectiveness of models for generating false, misleading, or abusive language.",
"Even when not deliberately tailored to generation of harmful language, learned semantic representations might not accurately represent the world because of errors both in prediction (as discussed in 5) and in training data."
] | [
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"objective",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"As an important task in sentiment analysis, Multimodal Aspect-Based Sentiment Analysis (MABSA) has attracted increasing attention in recent years.",
"However, previous approaches either",
"(i) use separately pre-trained visual and textual models, which ignore the cross-modal alignment or",
"(ii) use vision-language models pre-trained with general pre-training tasks, which are inadequate to identify fine-grained aspects, opinions, and their alignments across modalities.",
"To tackle these limitations, we propose a task-specific Vision-Language Pre-training framework for MABSA (VLP-MABSA), which is a unified multimodal encoder-decoder architecture for all the pretraining and downstream tasks.",
"We further design three types of task-specific pre-training tasks from the language, vision, and multimodal modalities, respectively.",
"Experimental results show that our approach generally outperforms the state-of-the-art approaches on three MABSA subtasks.",
"Further analysis demonstrates the effectiveness of each pretraining task.",
"The source code is publicly released at https://github.com/NUSTM/ VLP-MABSA .",
"Recent years have witnessed increasing attention on the Multimodal Aspect-Based Sentiment Analysis (MABSA) task 1 .",
"Previous research mostly focused on its two subtasks, including Multimodal Aspect Term Extraction (MATE) and Multimodal Aspect-oriented Sentiment Classification (MASC).",
"Given a text-image pair as input, MATE aims to extract all the aspect terms mentioned in the text (Zhang et al., 2018; Lu et al., 2018; Wu et al., 2020a,b; Zhang et al., 2021a), whereas MASC Corresponding authors.",
"aims to classify the sentiment towards each extracted aspect term (Xu et al., 2019; Yu and Jiang, 2019; Khan and Fu, 2021).",
"As the two subtasks are closely related to each other, Ju et al. (2021) recently introduced the Joint Multimodal Aspect-Sentiment Analysis (JMASA) task, aiming to jointly extract the aspect terms and their corresponding sentiments.",
"For example, given the text-image pair in Table.",
"1, the goal of JMASA is to identify all the aspect-sentiment pairs, i.e., ( Sergio Ramos , Positive ) and ( UCL , Neutral ).",
"Most of the aforementioned studies to MABSA primarily focused on employing pre-trained unimodal models (e.g., BERT for text and ResNet for image) to obtain textual and visual features respectively.",
"The separate pre-training of visual and textual features ignores the alignment between text and image.",
"It is therefore crucial to perform vision-language pre-training to capture such cross-modal alignment.",
"However, for the MABSA task, the studies on vision-language pre-training are still lacking.",
"To the best of our knowledge, there are very few studies focusing on vision-language pre-training for one of the MABSA subtasks, i.e., MATE (Sun et al., 2020, 2021).",
"One major drawback of these studies is that they mainly employ general vision-language understanding tasks (e.g., text-image 2149 matching and masked language modeling) to capture text-image alignments.",
"Such general pretraining is inadequate to identify fine-grained aspects, opinions, and their alignments across the language and vision modalities.",
"Therefore, it is important to design task-specific vision-language pre-training, to model aspects, opinions, and their alignments for the MABSA task.",
"To address this issue, in this paper, we propose a task-specific Vision-Language Pre-training framework for Multimodal Aspect-Based Sentiment Analysis.",
"Specifically, inspired by the recent success of BART-based generative models in text-based ABSA (Yan et al., 2021), we first construct a generative multimodal architecture based on BART (Lewis et al., 2020), for both vision-language pre-training and the downstream MABSA tasks.",
"We then propose three types of vision-language pre-training tasks, including Masked Language Modeling (MLM) and Textual Aspect-Opinion Extraction (AOE) from the language modality, Masked Region Modeling (MRM) and Visual Aspect-Opinion Generation (AOG) from the vision modality, and Multimodal Sentiment Prediction (MSP) across two modalities.",
"Figure 1 illustrates the whole framework of our proposed pre-training approach.",
"Compared with general pre-training methods, our task-specific pretraining approach incorporates multimodal aspect, opinion, and sentiment supervision, which guides pre-trained models to capture important objective and subjective information for the MABSA task.",
"To evaluate the effectiveness of our pre-training approach, we adopt MVSA-Multi, a widely-used Multimodal Twitter dataset for coarse-grained text-image sentiment analysis (Niu et al., 2016), as our pre-training dataset.",
"We then employ several representative pre-trained models and rule-based methods to obtain the aspect and opinion supervision for our AOE and AOG tasks.",
"As the dataset provides sentiment labels for each multimodal tweet, we adopt them as the supervision for our MSP task.",
"Our contributions in this work are as follows: We introduce a task-specific Vision-Language Pre-training framework for MABSA named VLP-MABSA, which is a unified multimodal encoder-decoder architecture for all the pretraining and downstream tasks.",
"Apart from the general MLM and MRM tasks, we further introduce three task-specific pretraining tasks, including Textual Aspect-Opinion Extraction, Visual Aspect-Opinion Generation, and Multimodal Sentiment Prediction, to identify fine-grained aspect, opinions, and their cross-modal alignments.",
"Experiments on three MABSA subtasks show that our pre-training approach generally obtains significant performance gains over the state-of-the-art methods.",
"Further analysis on supervised and weakly-supervised settings demonstrates the effectiveness of each pre-training task.",
"Vision-Language Pre-training.",
"Inspired by the success of pre-trained language models like BERT (Devlin et al., 2019), many multimodal pre-training models have been proposed (Chen et al., 2020b; Yu et al., 2021; Zhang et al., 2021b) to perform many vision-language tasks which achieve fantastic success.",
"Correspondingly, many general pre-training tasks are proposed, such as Masked Language Modeling (MLM), Masked Region Modeling (MRM) and Image-Text Matching (ITM) (Chen et al., 2020b; Yu et al., 2021).",
"Besides, in order to make the pre-trained models better understand downstream tasks, researchers also design task-specific pre-training models for different downstream tasks (Hao et al., 2020; Xing et al., 2021).",
"In our work, apart from the popular general pre-training tasks, we also design three kinds of task-specific pre-training tasks for the MABSA task.",
"Text-based Joint Aspect-Sentiment Analysis (JASA).",
"JASA aims to extract aspect terms in the text and predict their sentiment polarities.",
"Many approaches have been proposed including pipeline approaches (Zhang et al., 2015; Hu et al., 2019), multi-task learning approaches (He et al., 2019; Hu et al., 2019) and collapsed label-based approaches (Li et al., 2019; Hu et al., 2019; Chen et al., 2020a).",
"Recently, Yan et al. (2021) proposed a unified generative framework which achieves highly competitive performance on several benchmark datasets for JASA.",
"Multimodal Sentiment Analysis.",
"Multimodal Sentiment Analysis (MSA) in social media posts is an important direction of sentiment analysis.",
"Many neural network approaches have been proposed to perform the coarse-grained MSA in the literature, which aim to detect the overall sentiment of each input social post (You et al., 2015, 2016; Luo et al., 2017; Xu et al., 2018; Yang et al., 2021b).",
"Different 2150 Visual Aspect-Opinion Generation <bos> <msp> BART Decoder Positive Multimodal Sentiment Prediction BART Decoder <bos> <mlm> Best <eos> Masked Language Modeling concert Best concert Justin BART Decoder <bos> <aoe> Bieber <sep> Textual Aspect-Opinion Extraction 5 6 <sep> <eos> best 1 1 best guy <img> </img> Faster R-CNN Token Embedding Token Embedding <bos> Best <mask> opening Justin <eos> BART Encoder 1 2 3 4 5 6 Position index: e <img> v 1 v zero v 3 v 36 e </img> e <bos> e Best e <mask> e opening e of e Justin e Bieber e ! of Textual Pre-training <bos> <aog> BART Decoder <bos><mrm> Masked Region Modeling BART Decoder <feat><zero><feat> handsomeguy handsome <eos> Visual Pre-training Multimodal Pre-training Bieber 7 ! e <eos> Token Embedding Figure 1: Overview of our Vision-Language Pre-Training framework for MABSA from these studies, our work focuses on the fine-grained MABSA task, which aims to identify the sentiments towards all the aspects mentioned in each input social post.",
"Multimodal Aspect-Based Sentiment Analysis.",
"As an important sentiment analysis task, many approaches have been approached to tackle the three subtasks of MABSA, including Multimodal Aspect Term Extraction (Zhang et al., 2018; Yu et al., 2020b; Wu et al., 2020a,b; Sun et al., 2020; Zhang et al., 2021a), Multimodal Aspect Sentiment Classification (Xu et al., 2019; Yu et al., 2020a; Yang et al., 2021a; Khan and Fu, 2021) and Joint Multimodal Aspect-Sentiment Analysis (Ju et al., 2021).",
"In this work, we aim to propose a general pre-training framework to improve the performance of all the three subtasks.",
"Figure 1 shows the overview of our model architecture.",
"The backbone of our model is BART (Lewis et al., 2020), which is a denoising autoencoder for sequence-to-sequence models.",
"We extend BART to encode both textual and visual inputs, and decode pre-training and downstream tasks from different modalities.",
"In the following subsections, we first introduce our feature extractor, and then illustrate the encoder and decoder of our model, followed by describing the details of three types of pre-training tasks and downstream MABSA tasks.",
"Image Representation.",
"Following many existing Vision-Language pre-training models (Chen et al., 2020b; Yu et al., 2021), we employ Faster R-CNN (Anderson et al., 2018) to extract visual features.",
"Specifically, we adopt Faster R-CNN to extract all the candidate regions from an input image.",
"We then only retain 36 regions with the highest confidence.",
"Meanwhile, we also keep the semantic class distribution of each region, which will be used for the Masked Region Modeling task.",
"For the retained regions, we use mean-pooled convolutional features processed by Faster R-CNN as our visual features.",
"Let us use R = { r 1 , ..., r 36 } to denote the visual features, where r i R 2048 refers to the visual feature of the i -th region.",
"To be consistent with the text representation, we adopt a linear transformation layer to project visual features to d -dimensional vectors, denoted by V R d 36 .",
"Text Representation.",
"For text input, we first tokenize the text and then feed tokens to the embedding matrix.",
"The embeddings of text tokens are used as text features.",
"Let us use E = { e 1 , ..., e T } to denote the token indexes of text inputs where T denotes the length of the input text, and W = { w 1 , ..., w T } to denote the embeddings of tokens.",
"We employ a BART-based generative framework for both vision-language pre-training and downstream MABSA tasks.",
"Encoder.",
"The encoder of our model is a multilayer bidirectional Transformer.",
"As shown in Figure 1, to distinguish inputs of different modalities, we follow Xing et al. (2021) by using (cid:104) img (cid:105) and (cid:104) /img (cid:105) to indicate the start and the end of visual features, and (cid:104) bos (cid:105) and (cid:104) eos (cid:105) to indicate the textual input.",
"In the following part of the paper, we denote the concatenated multimodal input by X .",
"decoder is unidirectional when generating outputs, while the encoder is bidirectional.",
"Since all pretraining tasks share the same decoder, we insert two special tokens at the beginning of the inputs of the decoder to indicate different pre-training tasks.",
"Following Yan et al. (2021), we insert a special token (cid:104) bos (cid:105) to indicate the beginning of generation, and then insert a task-specific special token to indicate the task type.",
"Specifically, the special tokens for Masked Language Modeling, Textual Aspect-Opinion Extraction, Masked Region Modeling, Visual Aspect-Opinion Generation, and Multimodal Sentiment Prediction are (cid:104) bos (cid:105)(cid:104) mlm (cid:105) , (cid:104) bos (cid:105)(cid:104) aoe (cid:105) , (cid:104) bos (cid:105)(cid:104) mrm (cid:105) , (cid:104) bos (cid:105)(cid:104) aog (cid:105) , and (cid:104) bos (cid:105)(cid:104) msp (cid:105) , respectively.",
"The dataset we use for pre-training is MVSA-Multi (Niu et al., 2016), which is widely used in Multimodal Twitter Sentiment Analysis (Yadav and Vishwakarma, 2020; Yang et al., 2021b).",
"This dataset provides image-text input pairs and coarse-grained sentiments of image-text pairs.",
"Statistics of the dataset are given in Table",
"2. With the dataset, we design three types of pretraining tasks, including textual, visual, and multimodal pre-training as follows.",
"Textual Pre-training contains two tasks: a general Masked Language Modeling task to build alignment between textual and visual features and a task-specific Textual Aspect-Opinion Extraction task to extract aspects and opinions from text.",
"Masked Language Modeling (MLM).",
"In the MLM pre-training task, we use the same strategy as BERT (Devlin et al., 2019) by randomly masking the input text tokens with a probability of 15%.",
"The goal of the MLM task is to generate the original text based on the image and the masked text, and thus the loss function of the MLM task is: LMLM = EX DT (cid:88) i =1 log P ( e i | e < i , X ) , (1) where e i and X denote the i th token of the input text and the masked multimodal input, respectively.",
"T is the length of input text.",
"Textual Aspect-Opinion Extraction (AOE).",
"The AOE task aims to extract aspect and opinion terms from the text.",
"Since the MVSA-Multi dataset does not provide annotations for aspect and opinion terms, we resort to a pre-trained model for aspect extraction and a rule-based method for opinion extraction.",
"Specifically, for aspect extraction, we employ the pre-trained model from a wellknown Named Entity Recognition (NER) tool for tweets (Ritter et al., 2011) to perform NER on each tweet in the dataset, and regard the recognized entities as aspect terms.",
"For opinion extraction, we utilize a widely-used sentiment lexicon named Senti-WordNet (Esuli and Sebastiani, 2006) to obtain the dictionary of opinion words.",
"Given each tweet, if its sub-sequences (i.e., words or phrases) match the words in the dictionary, we treat them as opinion terms.",
"These extracted aspect and opinion terms are used as the supervision signal of our AOE task.",
"With the textual aspect-opinion supervision, we follow Yan et al. (2021) by formulating the AOE task as an index generation task.",
"Given the input text as the source sequence, the goal is to generate a target index sequence which consists of the start and end indexes of all aspect and opinion terms.",
"Let us use Y = [ a s 1 , a e 1 , ..., a sM , a eM , (cid:104) sep (cid:105) , o s 1 , o e 1 , ..., o sN , o eN , (cid:104) eos (cid:105) ] to denote the target index sequence, where M and N are the number of aspect terms and opinion terms, a s , a e and o s , o e indicate the start and end indexes of an aspect term and an opinion term respectively, (cid:104) sep (cid:105) is used to separate aspect terms and opinion terms, and (cid:104) eos (cid:105) informs the end of extraction.",
"For example, as shown in Figure 1, the extracted aspect and opinion terms are Justin Bieber and best respectively, and the target sequence is Y =[5 , 6 , (cid:104) sep (cid:105) , 1 , 1 , (cid:104) eos (cid:105) ] .",
"For y t in the target sequence Y , it is either a position index or a special token (e.g., (cid:104) sep (cid:105) ).",
"We use C = [ (cid:104) sep (cid:105) , (cid:104) eos (cid:105) ] to denote the set of special tokens, and C d as their embeddings.",
"We assume that H e denotes the encoder output of the concatenated multimodal input, H e T denotes the textual part of H e , and H eV denotes the visual part of H e .",
"The decoder takes the multimodal encoder output H e and the previous decoder output Y <t as inputs, and predicts the token probability 2152 distribution P ( y t ) as follows: h dt = Decoder ( H e ; Y <t ) , (2) H eT = ( W + H eT ) / 2 , (3) P ( y t ) = Softmax ([ H eT ; C d ] h dt ) , (4) where W denotes the embeddings of input tokens.",
"The loss function of the AOE task is as follows: LAOE = EX D O (cid:88) t =1 log P ( y t | Y < t , X ) , (5) where O = 2 M + 2 N + 2 is the length of Y and X denotes the multimodal input.",
"Visual Pre-training contains two tasks: a general Masked Region Modeling task and a task-specific Visual Aspect-Opinion Generation task to capture",
"subjective and objective information in the image.",
"Masked Region Modeling (MRM).",
"Following Xing et al. (2021), our MRM task aims to predict the semantic class distribution of the masked region.",
"As shown in Figure 1, for the input of the encoder, we randomly mask image regions with a probability of 15%, which are replaced with zero vectors.",
"For the input of the decoder, we first add two special tokens (cid:104) bos (cid:105)(cid:104) mrm (cid:105) , and then represent each masked region with (cid:104) zero (cid:105) and each remaining region with (cid:104) feat (cid:105) .",
"After feeding the input to the decoder, an MLP classifier is stacked over the output of each (cid:104) zero (cid:105) to predict the semantic class distribution.",
"Let us use p ( v z ) to denote the predicted class distribution of the z -th masked region, and q ( v z ) to denote the class distribution detected by Faster R-CNN.",
"The loss function for MRM is to minimize the KL divergence of the two class distributions: LMRM = EX DZ (cid:88) z =1 DKL ( q ( v z ) || p ( v z )) , (6) where Z is the number of masked regions.",
"Visual Aspect-Opinion Generation (AOG).",
"The AOG task aims to generate the aspect-opinion pair detected from the input image.",
"In the field of Computer Vision, Borth et al. (2013) proposed to detect the visual sentiment concept, i.e., Adjective-Noun Pair (ANP) such as smiling man and beautiful landscape in the image.",
"Since the nouns and adjectives of ANP respectively capture the fine-grained aspects and opinions in the image, we regard ANPs as visual aspect-opinion pairs.",
"In order to detect the ANP of each input image, we adopt a pre-trained ANP detector DeepSentiBank 2 (Chen et al., 2014) to predict the class distribution over 2089 pre-defined ANPs.",
"The ANP with the highest probability is selected as the supervision signal of our AOG task.",
"For example, in Figure 1, the ANP detected from the input image is handsome guy , and we regard it as the supervision.",
"With the visual aspect-opinion supervision, we formulate the AOG task as a sequence generation task.",
"Specifically, let us use G = { g 1 , ..., g | G | } to denote the tokens of the target ANP and | G | to denote the number of ANP tokens.",
"The decoder then takes the multimodal encoder output H e and the previous decoder output G <i as inputs, and predicts the token probability distribution P ( g i ) : h di = Decoder ( H e ; G <i ) , (7) P ( g i ) = Softmax ( ET h d i ) , (8) where E denotes the embedding matrix of all tokens in the vocabulary.",
"The loss function of the AOG task is: LAOG = EX D | G | (cid:88) i =1 log P ( g i | g < i , X ) .",
"Multimodal Pre-training has one task named Multimodal Sentiment Prediction (MSP).",
"Different from the aforementioned pre-training tasks whose supervision signals only come from one modality, the supervision signals for MSP come from multimodality, which can enhance models to identify the subjective information in both language and vision and capture their rich alignments.",
"Multimodal Sentiment Prediction (MSP).",
"As the MVSA-Multi dataset provides the coarse-grained sentiment labels for all the text-image pairs, we use the sentiment labels as supervision signals of our MSP task.",
"Formally, we model the MSP task as a classification task, where we first feed the two special tokens (cid:104) bos (cid:105)(cid:104) msp (cid:105) to the decoder and then predict the sentiment distribution P ( s ) as follows: h dmsp = Decoder ( H e ; E msp ) , (10) P ( s ) = Softmax ( MLP ( h dmsp )) , (11) where E msp is the embeddings of two special tokens.",
"2 https://github.com/stephen-pilli/DeepSentiBank 2153 final <img> </img> <bos> <eos> Position index: Sergio 1 Ramos 2 10 UCL 9 Faster R-CNN Token Embedding Token Embedding BART Encoder BART Decoder <bos> <AESC> 1 2 POS 9 9 NEU <eos> Sergio Ramos POS UCL UCL NEU Token Embedding Figure 2: An example of downstream task JMASA.",
"We use the cross-entropy loss for the MSP task: LMSP = EX D log P ( s | X ) , (12) where s is the golden sentiment annotated in dataset.",
"To optimize all the model parameters, we adopt the alternating optimization strategy to iteratively optimize our five pre-training tasks.",
"The objective function is as follows: L = 1 LMLM + 2 LAOE + 3 LMRM + 4 LAOG + 5 LMSP (13) where 1 , 2 , 3 , 4 , and 5 are tradeoff hyper-parameters to control the contribution of each task.",
"We consider all the three subtasks in MABSA as our downstream tasks, including Joint Multimodal Aspect-Sentiment Analysis (JMASA), Multimodal Aspect Term Extraction (MATE), and Multimodal Aspect-oriented Sentiment Classification (MASC).",
"We model these downstream tasks based on the same BART-based generative framework in vision-language pre-training, so that the downstream task can benefit more from pre-training during the fine-tuning stage.",
"Following Yan et al. (2021), we formulate the outputs of the three subtasks as follows: JMASA: Y = [ a s 1 , a e 1 , s 1 , ..., a si , a ei , s i , ... ] , MATE: Y = [ a s 1 , a e 1 , ..., a si , a ei , ... ] , MASC: Y = [ a s 1 , a e 1 , s 1 , ..., a s i , a e i , s i , ... ] , where a si , a ei , and s i inform the start index, end index, and sentiment of an aspect term in the text.",
"The underlined tokens are given during inference.",
"Similar to the AOE task in Section 3.3.1, we formulate all the subtasks as index generation tasks, and use Eqn.",
"(2) to Eqn.",
"(4) to generate the token distribution.",
"The difference is that the special token set is modified as C = [ (cid:104) POS (cid:105) , (cid:104) NEU (cid:105) , (cid:104) NEG (cid:105) , (cid:104) EOS (cid:105) ] by adding the sentiment categories.",
"Figure 2 shows an example for JMASA.",
"Since the aspect-sentiment pairs are ( Sergio Ramos , Positive ) and ( UCL , Neutral ), its target sequence is [1 , 2 , (cid:104) POS (cid:105) , 9 , 9 , (cid:104) NEU (cid:105) , (cid:104) eos (cid:105) ] .",
"Downstream datsets.",
"We adopt two benchmark datasets annotated by Yu and Jiang (2019), namely TWITTER-2015 and TWITTER-2017 to evaluate our model.",
"The statistics of the two datasets are shown in Table",
"3. Implementation Details.",
"We employ BART-base (Lewis et al., 2020) as our framework.",
"Specifi-cally, the encoder and decoder both have six layers and are initialized with BART-base parameters.",
"We fix all the hyper-parameters after tuning them on the development set.",
"The pre-training tasks were trained for 40 epochs and the downstream tasks were fine-tuned for 35 epochs.",
"The batch sizes are set to 64 and 16, respectively.",
"The learning rate is set to 5e-5.",
"The hidden size of our model is set to 768, which is the same as BART.",
"The tradeoff hyper-parameters 1 , 2 , 3 , 4 , and 5 are all set to 1.",
"Note that for the subtask MASC, different from Ju et al. (2021) evaluating on the correctly predicted aspects, we provide all the golden aspects to the decoder of our framework during the inference stage and evaluate on all the aspects.",
"We implement all the models with PyTorch, and run experiments on a RTX3090 GPU.",
"Evaluation Metrics.",
"We evaluate our model over three subtasks of MABSA and adopt Micro-F1 score (F1), Precision (P) and Recall (R) as the evaluation metrics to measure the performance.",
"For MASC, to fairly compare with other approaches, 2154 TWITTER-2015 TWITTER-2017 P R F1 P R F1 Text-based methods SPAN 53.7 53.9 53.8 59.6 61.7 60.6 D-GCN 58.3 58.8 59.4 64.2 64.1 64.1 BART 62.9 65.0 63.9 65.2 65.6 65.4 Multimodal methods UMT+TomBERT 58.4 61.3 59.8 62.3 62.4 62.4 OSCGA+TomBERT 61.7 63.4 62.5 63.4 64.0 63.7 OSCGA-collapse 63.1 63.7 63.2 63.5 63.5 63.5 RpBERT-collapse 49.3 46.9 48.0 57.0 55.4 56.2 JML 65.0 63.2 64.1 66.5 65.5 66.0 VLP-MABSA 65.1 68.3 66.6 66.9 69.2 68.0 Table 4: Results of different approaches for JMASA.",
"In this section, we introduce four types of compared systems for different tasks.",
"Approaches for Multimodal Aspect Term Extraction (MATE).",
"1) RAN (Wu et al., 2020a), which aligns text with object regions by a co-attention network.",
"2) UMT (Yu et al., 2020b), which uses Cross-Modal Transformer to fuse text and image representations for Multimodal Named Entity Recognition (MNER).",
"3) OSCGA (Wu et al., 2020b), another MNER approach using visual objects as image representations.",
"4) RpBERT (Sun et al., 2021), which uses a multitask training model for MNER and image-text relation detection.",
"Approaches for Multimodal Aspect Sentiment Classification (MASC).",
"1) TomBERT (Yu and Jiang, 2019), which tackles the MASC task by employing BERT to capture intra-modality dynamics.",
"2) CapTrBERT (Khan and Fu, 2021), which translates the image to a caption as an auxiliary sentence for sentiment classification.",
"Text-based approaches for Joint Aspect-Sentiment Analysis (JASA).",
"1) SPAN (Hu et al., 2019), which formulates the JASA task as a span prediction problem.",
"2) D-GCN (Chen et al., 2020a), which proposes a directional graph convolutional network to capture the correlation between words.",
"3) BART (Yan et al., 2021), which adapts the JASA task to BART by formulating it as an index generation problem.",
"Multimodal approaches for Joint Multimodal Aspect-Sentiment Analysis (JMASA).",
"1) UMT+TomBERT and OSCGA+TomBERT , which are simple pipeline approaches by combining methods for subtasks mentioned above.",
"2) Methods TWITTER-2015 TWITTER-2017 P R F1 P R F1 RAN 80.5 81.5 81.0 90.7 90.7 90.0 UMT 77.8 81.7 79.7 86.7 86.8 86.7 OSCGA 81.7 82.1 81.9 90.2 90.7 90.4 JML-MATE 83.6 81.2 82.4 92.0 90.7 91.4 VLP-MABSA 83.6 87.9 85.7 90.8 92.6 91.7 Table 5: Results of different approaches for MATE.",
"UMT-collapsed (Yu et al., 2020b), OSCGA-collapsed (Wu et al., 2020b) and RpBERT-collapsed (Sun et al., 2021), which model the JMASA task with collapsed labels such as B-POS and I-POS .",
"3) JML (Ju et al., 2021), which is a multi-task learning approach proposed recently with the auxiliary cross-modal relation detection task.",
"In this section, we analyze the results of different",
"approaches on three subtasks of MABSA.",
"Results of JMASA.",
"Table 4 shows the results of different methods for JMASA.",
"As we can see from the table, BART achieves the best performance among text-based methods, and it even outperforms some multimodal methods, which proves the superiority of our base framework.",
"For multimodal methods, JML achieves better performance than previous methods mainly due to its auxiliary task about relation detection between image and text.",
"Among all the methods, VLP-MABSA which is the whole model with all the pre-training tasks consistently performs the best across two datasets.",
"Specifically, it significantly outperforms the second best system JML with 2.5 and 2.0 absolute percentage points with respect to F1 on TWITTER-2015 and TWITTER-2017, respectively.",
"This mainly benefits from our task-specific pre-training tasks, which identify aspects and opinions as well as their alignments across the two modalities.",
"spectively.",
"Similar to the trend on the JMASA subtask, we can clearly observe that our proposed approach VLP-MABSA generally achieves the best performance across the two datasets, except on the accuracy metric of TWITTER-2015.",
"These observations further demonstrate the general effectiveness of our proposed pre-training approach.",
"To explore the impact of each pre-training task, we perform a thorough ablation study over the full supervision setting which uses full training dataset and the weak supervision setting which only randomly chooses 200 training samples for fine-tuning.",
"Impact of Each Pre-training Task.",
"As we can see from Table 7, the performance generally improves with respect to most metrics when adding more pre-training tasks.",
"To better analyze the effect of each pre-training task, we take the weak supervision experiments on TWITTER-2015 as an example.",
"When only using MLM to pre-train our model, the performance only gets slight improvements.",
"After adding the AOE task, the result of MATE gets a huge improvement of 9.44% on F1 .",
"This shows that the AOE task greatly enhances our model's ability to recognize the aspect terms.",
"When adding the MRM task, the performance gets slight improvements again.",
"This reflects that general pre-training tasks (e.g., MLM and MRM) are not adequate for our model to tackle downstream tasks which need the model to understand the subjective and objective information from image and text.",
"When adding the AOG task, the performance over three subtasks gets a moderate improvement, which proves the effectiveness of 200 400 600 800 1000 1200 1400 1600 the number of samples 35 40 45 50 55 60 65 70 F 1 TWITTER-2015 no-pretrainafter-pretrain 200 400 600 800 1000 1200 1400 1600 the number of samples 45 50 55 60 65 70 F 1 TWITTER-2017 no-pretrainafter-pretrain Figure 3: The effectiveness of pre-training when using different number of training samples for the downstream task.",
"the AOG task.",
"Finally, adding the MSP task significantly boosts the performance, especially on the MASC task.",
"This shows that the MSP task can enhance our model's understanding of sentiment across language and image modalities.",
"By combining all the pre-training tasks, our full model generally achieves the best results over most of the subtasks whether in both full supervision and weak supervision settings.",
"Impact of pre-training when using different number of downstream training samples.",
"To better understand the impact of pre-training, we compare the results with and without pre-training when adopting different number of samples for downstream training.",
"We use the JMASA task as the example to observe the impact.",
"As shown in Fig. 3, when the sample size is small, pre-training can bring a huge improvement.",
"In contrast, when the sample size becomes larger, pre-training brings relatively small improvements.",
"This further illustrates the robustness and the effectiveness of our pre-training approach, especially in low-resource scenarios.",
"To further demonstrate the effectiveness of our approach, we present four test examples with predictions from different methods.",
"The compared methods are BART , our framework using multimodal inputs without pre-training (denoted by MM ), and our framework using multimodal inputs with full pre-training (denoted by VLP ), respectively.",
"As shown in Table 8, for example",
"(a), both BART and MM extracted the wrong aspect term (i.e., the Faithfull Pearl Jam ) and gave the incorrect sentiment prediction towards Eddie .",
"For example",
"(b), BART only extracted one aspect term Madonna while MM identified an additional aspect term Demelza .",
"However, the sentiment towards Madonna was wrongly predicted by MM .",
"For example",
"(c), BART only 2156 Image Text",
"recognized part of the aspect term Colombia and MM wrongly predicted the sentiment towards Miss Colombia as Neutral .",
"For example",
"(d), both BART and MM failed to recognize the aspect term D-League .",
"Among all the cases, our VLP model with full pre-training correctly extracted all the aspect terms and classified the sentiment , which shows the advantage of our generative framework and task-specific pre-training tasks.",
"In this paper, we proposed a task-specific Vision-Language Pre-training framework for Multimodal Aspect-Based Sentiment Analysis (VLP-MABSA).",
"We further designed three kinds of pre-training tasks from the language, vision, and multi-modal modalities, respectively.",
"Experimental results show that our proposed approach generally outperforms the state-of-the-art methods for three subtasks of MABSA.",
"Our work is a first step towards a unified Vision-Language Pre-training framework for MABSA.",
"In the future, we plan to apply our pretraining approach on a larger dataset and consider the relation between image and text in our pretraining framework.",
"We hope this work can potentially bring new insights and perspectives to the research of MABSA.",
"The authors would like to thank the anonymous reviewers for their insightful comments.",
"This work was supported by the Natural Science Foundation of China (62076133 and 62006117), and the Natural Science Foundation of Jiangsu Province for Young Scholars (BK20200463) and Distinguished Young Scholars (BK20200018)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"method",
"objective",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"objective",
"method",
"objective",
"other",
"other"
] |
[
"Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others.",
"In this work, we propose to improve cross-lingual fine-tuning with consistency regularization.",
"Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation.",
"In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set.",
"Experimental results on the XTREME benchmark show that our method 1 significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.",
"Pre-trained cross-lingual language models (Con-neau and Lample, 2019; Conneau et al., 2020a; Chi et al., 2020) have shown great transferability across languages.",
"By fine-tuning on labeled data in a source language, the models can generalize to other target languages, even without any additional training.",
"Such generalization ability reduces the required annotation efforts, which is prohibitively expensive for low-resource languages.",
"Recent work has demonstrated that data augmentation is helpful for cross-lingual transfer, e.g., translating source language training data into target languages (Singh et al., 2019), and generating code-switch data by randomly replacing input words in the source language with translated words in target languages (Qin et al., 2020).",
"By populating the dataset, their fine-tuning still treats training Contribution during internship at Microsoft Research.",
"instances independently, without considering the inherent correlations between the original input and its augmented example.",
"In contrast, we propose to utilize consistency regularization to better leverage data augmentation for cross-lingual fine-tuning.",
"Intuitively, for a semantic-preserving augmentation strategy, the predicted result of the original input should be similar to its augmented one.",
"For example, the classification predictions of an English sentence and its translation tend to remain consistent.",
"In this work, we introduce a cross-lingual fine-tuning method XTUNE that is enhanced by consistency regularization and data augmentation.",
"First, example consistency regularization enforces the model predictions to be more consistent for semantic-preserving augmentations.",
"The regularizer penalizes the model sensitivity to different surface forms of the same example (e.g., texts written in different languages), which implicitly encourages cross-lingual transferability.",
"Second, we introduce model consistency to regularize the models trained with various augmentation strategies.",
"Specifically, given two augmented versions of the same training set, we encourage the models trained on these two datasets to make consistent predictions for the same example.",
"The method enforces the corpus-level consistency between the distributions learned by two models.",
"Under the proposed fine-tuning framework, we study four strategies of data augmentation, i.e., subword sampling (Kudo, 2018), code-switch substitution (Qin et al., 2020), Gaussian noise (Agha-janyan et al., 2020), and machine translation.",
"We evaluate XTUNE on the XTREME benchmark (Hu et al., 2020), including three different tasks on seven datasets.",
"Experimental results show that our method outperforms conventional fine-tuning with data augmentation.",
"We also demonstrate that XTUNE is flexible to be plugged in various tasks, such as classification, span extraction, and sequence labeling.",
"We summarize our contributions as follows: We propose XTUNE , a cross-lingual fine-tuning method to better utilize data augmentations based on consistency regularization.",
"We study four types of data augmentations that can be easily plugged into cross-lingual fine-tuning.",
"We give instructions on how to apply XTUNE to various downstream tasks, such as classification, span extraction, and sequence labeling.",
"We conduct extensive experiments to show that XTUNE consistently improves the performance of cross-lingual fine-tuning.",
"Cross-Lingual Transfer Besides learning cross-lingual word embeddings (Mikolov et al., 2013; Faruqui and Dyer, 2014; Guo et al., 2015; Xu et al., 2018; Wang et al., 2019), most recent work of cross-lingual transfer is based on pre-trained cross-lingual language models (Conneau and Lample, 2019; Conneau et al., 2020a; Chi et al., 2020).",
"These models generate multilingual contextualized word representations for different languages with a shared encoder and show promising cross-lingual transferability.",
"Cross-Lingual Data Augmentation Machine translation has been successfully applied to the cross-lingual scenario as data augmentation.",
"A common way to use machine translation is to fine-tune models on both source language training data and translated data in all target languages.",
"Furthermore, Singh et al. (2019) proposed to replace a segment of source language input text with its translation in another language.",
"However, it is usually impossible to map the labels in source language data into target language translations for token-level tasks.",
"Zhang et al. (2019) used code-mixing to perform the syntactic transfer in cross-lingual dependency parsing.",
"Fei et al. (2020) constructed pseudo translated target corpora from the gold-standard annotations of the source languages for cross-lingual semantic role labeling.",
"Fang et al. (2020) proposed an additional Kullback-Leibler divergence self-teaching loss for model training, based on auto-generated soft pseudo-labels for translated text in the target language.",
"Consistency Regularization One strand of work in consistency regularization focused on regularizing model predictions to be invariant to small perturbations on image data.",
"The small perturbations can be random noise (Zheng et al., 2016), adversarial noise (Miyato et al., 2019; Carmon et al., 2019) and various data augmentation approaches (Hu et al., 2017; Ye et al., 2019; Xie et al., 2020).",
"Similar ideas are used in the natural language processing area.",
"Both adversarial noise (Zhu et al., 2020; Jiang et al., 2020; Liu et al., 2020) and sampled Gaussian noise (Aghajanyan et al., 2020) are adopted to augment input word embeddings.",
"Another strand of work focused on consistency under different model parameters (Tarvainen and Valpola, 2017; Athiwaratkun et al., 2019), which is complementary to the first strand.",
"We focus on the cross-lingual setting, where consistency regularization has not been fully explored.",
"Conventional cross-lingual fine-tuning trains a pretrained language model on the source language and directly evaluates it on other languages, which is also known as the setting of zero-shot cross-lingual fine-tuning.",
"Specifically, given a training corpus D in the source language (typically in English), and a model f ( ; ) that predicts task-specific probability distributions, we define the loss of cross-lingual fine-tuning as: L task ( D , ) = (cid:88) x D (cid:96) ( f ( x ; ) , G ( x )) , where G ( x ) denotes the ground-truth label of example x , (cid:96) ( , ) is the loss function depending on the downstream task.",
"Apart from vanilla cross-lingual fine-tuning on the source language, recent work shows that data augmentation is helpful to improve performance on the target languages.",
"For example, Conneau and Lample (2019) add translated examples to the training set for better cross-lingual transfer.",
"Let A ( ) be a cross-lingual data augmentation strategy (such as code-switch substitution), and DA = D {A ( x ) | x D} be the augmented training corpus, the fine-tuning loss is L task ( DA , ) .",
"Notice that it is non-trivial to apply some augmentations for token-level tasks directly.",
"For instance, in part-of-speech Figure 1: Overview of our two-stage fine-tuning algorithm.",
"tagging, the labels of source language examples can not be mapped to the translated examples because of the lack of explicit alignments.",
"We propose to improve cross-lingual fine-tuning with two consistency regularization methods, so that we can effectively leverage cross-lingual data augmentations.",
"In order to encourage consistent predictions for an example and its semantically equivalent augmentation, we introduce example consistency regularization, which is defined as follows:",
"R 1 ( D , , A ) = (cid:88) x D KLS ( f ( x ; ) (cid:107) f ( A ( x ); )) , KLS ( P, Q ) = KL (stopgrad( P ) (cid:107) Q )+ KL (stopgrad( Q ) (cid:107) P )",
"where KLS ( ) is the symmertrical Kullback-Leibler divergence.",
"The regularizer encourages the predicted distributions f ( x ; ) and f ( A ( x ); ) to agree with each other.",
"The stopgrad( ) operation 2 is used to stop back-propagating gradients, which is also employed in (Jiang et al., 2020; Liu et al., 2020).",
"The ablation studies in Section 4.2 empirically show that the operation improves fine-tuning performance.",
"2 Implemented by",
".detach() in PyTorch.",
"While the example consistency regularization is conducted at the example level, we propose the model consistency to further regularize the model training at the corpus level.",
"The regularization is conducted at two stages.",
"First, we obtain a fine-tuned model on the training corpus D : = arg min 1 L task ( D , 1 ) .",
"In the second stage, we keep the parameters fixed.",
"The regularization term is defined as: R 2 ( DA , , ) = (cid:88) x D AKL ( f ( x ; ) (cid:107) f ( x ; )) where DA is the augmented training corpus, and KL ( ) is Kullback-Leibler divergence.",
"For each example x of the augmented training corpus DA , the model consistency regularization encourages the prediction f ( x ; ) to be consistent with f ( x ; ) .",
"The regularizer enforces the corpus-level consistency between the distributions learned by two models.",
"An unobvious advantage of model consistency regularization is the flexibility with respect to data augmentation strategies.",
"For the example of part-of-speech tagging, even though the labels can not be directly projected from an English sentence to its translation, we are still able to employ the regularizer.",
"Because the term R 2 is put on the same example x DA , we can always align the token-level predictions of the models and .",
"As shown in Figure 1, we combine example consistency regularization R 1 and model consistency regularization R 2 as a two-stage fine-tuning process.",
"Formally, we fine-tune a model with R 1 in the first stage: = arg min 1 L task ( D , 1 ) + R 1 ( D , 1 , A ) where the parameters are kept fixed for R 2 in the second stage.",
"LXTUNE = L task ( DA , ) + 1 R 1 ( DA , , A (cid:48) ) + 2 R 2 ( DA , , )",
"where 1 and 2 are the corresponding weights of two regularization methods.",
"Notice that the data augmentation strategies A , A (cid:48) , and A can be either different or the same, which are tuned as hyper-parameters.",
"We consider four types of data augmentation strategies in this work, which are shown in Figure 2.",
"We aim to study the impact of different data augmentation strategies on cross-lingual transferability.",
"Representing a sentence in different subword sequences can be viewed as a data augmentation strategy (Kudo, 2018; Provilkov et al., 2020).",
"We utilize XLM-R (Conneau et al., 2020a) as our pre-trained cross-lingual language model, while it applies subword tokenization directly on raw text data using SentencePiece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018).",
"As one of our data augmentation strategies, we apply the on-the-fly subword sampling algorithm in the unigram language model to generate multiple subword sequences.",
"Most data augmentation strategies in NLP change input text discretely, while we directly add random perturbation noise sampled from Gaussian distribution on the input embedding layer to conduct data augmentation.",
"When combining this data augmentation with example consistency R 1 , the method is similar to the stability training (Zheng et al., 2016), random perturbation training (Miyato et al., 2019) and the R3F method (Aghajanyan et al., 2020).",
"We also explore Gaussian noise's capability to generate new examples on continuous input space for conventional fine-tuning.",
"Anchor points have been shown useful to improve cross-lingual transferability.",
"Conneau et al. (2020b) analyzed the impact of anchor points in pre-training cross-lingual language models.",
"Following Qin et al. (2020), we generate code-switch data in multiple languages as data augmentation.",
"We randomly select words in the original text in the source language and replace them with target language words in the bilingual dictionaries to obtain code-switch data.",
"Intuitively, this type of data augmentation explicitly helps pre-trained cross-lingual models align the multilingual vector space by the replaced anchor points.",
"Machine translation has been proved to be an effective data augmentation strategy (Singh et al., 2019) under the cross-lingual scenario.",
"However, the ground-truth labels of translated data can be unavailable for token-level tasks (see Section 3), which disables conventional fine-tuning on the augmented data.",
"Meanwhile, our proposed model consistency R 2 can not only serve as consistency regularization but also can be viewed as a self-training objective to enable semi-supervised training on the unlabeled target language translations.",
"We give instructions on how to apply XTUNE to various downstream tasks, i.e., classification, span extraction, and sequence labeling.",
"By default, we use model consistency R 2 in full XTUNE .",
"We describe the usage of example consistency R 1 as follows.",
"For classification task, the model is expected to predict one distribution per example on n label types, i.e., model f ( ; ) should predict a probability distribution p cls R n label .",
"Thus we can directly use example consistency R 1 to regularize the consistency of the two distributions for all four types of our data augmentation strategies.",
"For span extraction task, the model is expected to predict two distributions per example p start , p end R n subword , indicating the probability distribution of where the answer span starts and ends, n subword denotes the length of the tokenized input text.",
"For Gaussian noise, the subword sequence remains unchanged so that example consistency R 1 can be directly applied to the two distributions.",
"Since subword sampling and code-switch substitution will change n subword , we control the ratio of words to be modified and utilize example consistency R 1 on unchanged positions only.",
"We do not use the example consistency R 1 for machine translation because it is impossible to explicitly align the two distributions.",
"Recent pre-trained language models generate representations at the subword-level.",
"For sequence labeling tasks, these models predict label distributions on each word's first subword.",
"Therefore, the model is expected to predict n word probability distributions per example on n label types.",
"Unlike span extraction, subword sampling, code-switch substitution, and Gaussian noise do not change n word .",
"Thus the three data augmentation strategies will not affect the usage of example consistency R 1 .",
"Although word alignment is a possible solution to map the predicted label distributions between translation pairs, the word alignment process will introduce more noise.",
"Therefore, we do not employ machine translation as data augmentation for the example consistency R 1 .",
"Datasets For our experiments, we select three types of cross-lingual understanding tasks from XTREME benchmark (Hu et al., 2020), including two classification datasets: XNLI (Conneau et al., 2018), PAWS-X (Yang et al., 2019), three span extraction datasets: XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), TyDiQA-GoldP (Clark et al., 2020), and two sequence labeling datasets: NER (Pan et al., 2017), POS (Nivre et al., 2018).",
"The statistics of the datasets are shown in the supplementary document.",
"Fine-Tuning Settings We consider two typical fine-tuning settings from Conneau et al. (2020a) and Hu et al. (2020) in our experiments, which are (1) cross-lingual transfer : the models are fine-tuned on English training data without translation available, and directly evaluated on different target languages; (2) translate-train-all : translation-based augmentation is available, and the models are fine-tuned on the concatenation of English training data and its translated data on all target languages.",
"Since the official XTREME repository 3 does not provide translated target language data for POS and NER, we use Google Translate to obtain translations for these two datasets.",
"Implementation Details We utilize XLM-R (Conneau et al., 2020a) as our pre-trained cross-lingual language model.",
"The bilingual dictionaries we used for code-switch substitution are from MUSE (Lample et al., 2018).",
"4 For languages that cannot be found in MUSE, we ignore these languages since other bilingual dictionaries might be of poorer quality.",
"For the POS dataset, we use the average-pooling strategy on subwords to obtain word representation since part-of-speech is related to different parts of words, depending on the language.",
"We tune the hyper-parameter and select the model with the best average results over all the languages' development set.",
"There are two datasets without development set in multi-languages.",
"For XQuAD, we tune the hyper-parameters with the development set of MLQA since they share the same training set and have a higher degree of overlap in languages.",
"For TyDiQA-GoldP, we use the English test set 3 github.com/google-research/xtreme 4 github.com/facebookresearch/MUSE Model Pair Sentence Structure Prediction Question Answering XNLI PAWS-X POS NER XQuAD MLQA TyDiQA Metrics Acc.",
"as the development set.",
"In order to make a fair comparison, the ratio of data augmentation in DA is all set to 1.0.",
"The detailed hyper-parameters are shown in the supplementary document.",
"Table 1 shows our results on XTREME.",
"For the cross-lingual transfer setting, we outperform previous works on all seven cross-lingual language understanding datasets.",
"5 Compared to XLM-R large baseline, we achieve an absolute 4.9-point improvement (70.0 vs. 74.9) on average over seven datasets.",
"For the translate-train-all setting, we achieved state-of-the-art results on six of the seven datasets.",
"Com-5 X-STILTs (Phang et al., 2020) uses additional SQuAD v1.1 English training data for the TyDiQA-GoldP dataset, while we prefer a cleaner setting here.",
"pared to FILTER, 6 we achieve an absolute 2.1-point improvement (74.4 vs. 76.5), and we do not need English translations during inference.",
"Table 2 shows how the two regularization methods affect the model performance separately.",
"For the cross-lingual transfer setting, XTUNE achieves an absolute 2.8-point improvement compared to our implemented XLM-R base baseline.",
"Meanwhile, fine-tuning with only example consistency R 1 and model consistency R 2 degrades the averaged results by 0.4 and 1.0 points, respectively.",
"For the translate-train-all setting, our proposed model consistency R 2 enables training on POS and NER even if labels of target language translations 6 FILTER directly selects the best model on the test set of XQuAD and TyDiQA-GoldP.",
"Under this setting, we can obtain 83.1/69.7 for XQuAD, 75.5/61.1 for TyDiQA-GoldP.",
"are unavailable in these two datasets.",
"To make a fair comparison in the translate-train-all setting, we augment the English training corpus with target language translations when fine-tuning with only example consistency R 1 .",
"Otherwise, we only use the English training corpus in the first stage, as shown in Figure",
"1(a).",
"Compared to XTUNE , the performance drop on two classification datasets under this setting is relatively small since R 1 can be directly applied between translation-pairs in any languages.",
"However, the performance is significantly degraded in three question answering datasets, where we can not align the predicted distributions between translation-pairs in R 1 .",
"We use subword sampling as the data augmentation strategy in R 1 for this situation.",
"Fine-tuning with only model consistency R 2 degrades the overall performance by 1.1 points.",
"These results demonstrate that the two consistency regularization methods complement each other.",
"Be-Model Tatoeba BUCCXLM-R base ( cross-lingual transfer ) 74.2 78.2 XLM-R base ( translate-train-all ) 79.7 79.7 XTUNE ( translate-train-all ) 82.3 82.2 with only example consistency R 1 82.0 82.1 with only model consistency R 2 79.5 79.0 Table 5: Results of cross-lingual retrieval with the models fine-tuned on XNLI.",
"sides, we observe that removing stopgrad degrades the overall performance by 0.5 points.",
"Table 3 provides results of each language on the XNLI dataset.",
"For the cross-lingual transfer setting, we utilize code-switch substitution as data augmentation for both example consistency R 1 and model consistency R 2 .",
"We utilize all the bilingual dictionaries, except for English to Swahili and English to Urdu, which MUSE does not provide.",
"Results show that our method outperforms all baselines on each language, even on Swahili (+2.2 points) and Urdu (+5.4 points), indicating our method can be generalized to low-resource languages even without corresponding machine translation systems or bilingual dictionaries.",
"For translate-train-all setting, we utilize machine translation as data augmentation for both example consistency R 1 and model consistency R 2 .",
"We improve the XLM-R large baseline by +2.2 points on average, while we still have +0.9 points on average compared to FILTER.",
"It is worth mentioning that we do not need corresponding English translations during inference.",
"Complete results on other datasets are provided in the supplementary document.",
"It is better to employ data augmentation for consistency regularization than for conventional fine-tuning.",
"As shown in Table 4,",
"com-(a) cross-lingual transfer",
"pared to employing data augmentation for conventional fine-tuning (Data Aug.), our regularization methods ( XTUNER 1 , XTUNER 2 ) consistently improve the model performance under all four data augmentation strategies.",
"Since there is no labeled data on translations in POS and the issue of distribution alignment in example consistency R 1 , when machine translation is utilized as data augmentation, the results for Data Aug. and XTUNER 1 in POS, as well as XTUNER 1 in MLQA, are unavailable.",
"We observe that Data Aug. can enhance the overall performance for coarse-grained tasks like XNLI, while our methods can further improve the results.",
"However, Data Aug. even causes the performance to degrade for fine-grained tasks like MLQA and POS.",
"In contrast, our proposed two consistency regularization methods improve the performance by a large margin (e.g., for MLQA under code-switch data augmentation, Data Aug. decreases baseline by 1.2 points, while XTUNER 1 increases baseline by 2.6 points).",
"We give detailed instructions on how to choose data augmentation strategies for XTUNE in the supplementary document.",
"XTUNE improves cross-lingual retrieval.",
"We fine-tune the models on XNLI with different settings and compare their performance on two cross-lingual retrieval datasets.",
"Following Chi et al. (2020) and Hu et al. (2020), we utilize representations averaged with hidden-states on the layer 8 of XLM-R base .",
"As shown in Table 5, we observe significant improvement from the translate-train-all baseline to fine-tuning with only example consistency R 1 , this suggests regularizing the task-specific output of translation-pairs to be consistent also encourages the model to generate language-invariant representations.",
"XTUNE only slightly improves upon this setting, indicating R 1 between translation-pairs is the most important factor to improve cross-lingual retrieval task.",
"as the ability to generate language-invariant representations.",
"As shown in Figure 3, we present t-SNE visualization of examples from the XNLI development set under three different settings.",
"We observe the model fine-tuned with XTUNE significantly improves the decision boundaries of different labels.",
"Besides, for an English example and its translations in other languages, the model fine-tuned with XTUNE generates more similar representations compared to the two baseline models.",
"This observation is also consistent with the cross-lingual retrieval results in Table 5.",
"In this work, we present a cross-lingual fine-tuning framework XTUNE to make better use of data augmentation.",
"We propose two consistency regularization methods that encourage the model to make consistent predictions for an example and its semantically equivalent data augmentation.",
"We explore four types of cross-lingual data augmentation strategies.",
"We show that both example and model consistency regularization considerably boost the performance compared to directly fine-tuning on data augmentations.",
"Meanwhile, model consistency regularization enables semi-supervised training on the unlabeled target language translations.",
"XTUNE combines the two regularization methods, and the experiments show that it can improve the performance by a large margin on the XTREME benchmark.",
"Wanxiang Che is the corresponding author.",
"This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153."
] | [
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Although Question-Answering has long been of research interest, its accessibility to users through a speech interface and its support to multiple languages have not been addressed in prior studies.",
"Towards these ends, we present a new task and a synthetically-generated dataset to do Fact-based Visual Spoken-Question Answering (FVSQA).",
"FVSQA is based on the FVQA dataset, which requires a system to retrieve an entity from Knowledge Graphs (KGs) to answer a question about an image.",
"In FVSQA, the question is spoken rather than typed.",
"Three sub-tasks are proposed: (1) speech-to-text based, (2) end-to-end, without speech-to-text as an intermediate component, and (3) cross-lingual, in which the question is spoken in a language different from that in which the KG is recorded.",
"The end-to-end and cross-lingual tasks are the first to require world knowledge from a multi-relational KG as a differentiable layer in an end-to-end spoken language understanding task, hence the proposed reference implementation is called Worldly-Wise (WoW).",
"WoW is shown to perform end-to-end cross-lingual FVSQA at same levels of accuracy across 3 languages English, Hindi, and Turkish.",
"Imagine being able to ask your voice assistant a question in any language, to learn some trivia about your favorite movie star.",
"This task falls in the realm of Knowledge-based Question Answering (QA).",
"One such challenging QA task is that of Fact-based Visual Question Answering (FVQA) (Wang et al., 2018) which seeks to imitate how humans leverage background common-sense knowledge when answering visual questions.",
"This task ensures that answering each question about an image requires external knowledge not directly available within the image or the text of the question.",
"(see Fig. 1).",
"The external information is provided in the form of knowledge graphs, which are multi-relational Figure 1: Example of a fact-based visual question Question Which object in this image can be found in a Jazz Club?",
"Supporting fact You are likely to find [[a trumpet]] in [[a jazz club]] Subject, Predicate, Object (Trumpet, AtLocation, Jazz Club) Answer Trumpet graphs, storing relational representations between entities.",
"The entities could be single words or phrases of words that denote objects or concepts.",
"Such tasks, though widely studied, exist mostly for well-resourced languages (Goyal et al., 2017; Wang et al., 2018).",
"These languages generally also have mature Automatic Speech Recognition (ASR) systems and language models.",
"The accompanying Knowledge Graphs (KGs) also tend to be limited to languages that are well-resourced (Auer et al., 2007; Tandon et al., 2014; Liu and Singh, 2004).",
"Against this background, it is worthwhile to think of building end-to-end systems which directly use speech signals as input, that can readily harness huge knowledge repositories stored in another language, instead of requiring Tabula Rasa learning.",
"Answering (FVSQA) along with the release of 5 hours of synthetic-speech data in each of the three languages English, Hindi, and Turkish.",
"2) An end-to-end architecture Worldly-Wise (WoW) capable of answering questions trained directly on speech features in all three languages.",
"To the best of our knowledge, this is the first work to perform KG knowledge acquisition using only a speech signal as input, without the requirement for a pre-trained automatic speech recognizer as a system component.",
"Worldly-Wise (WoW) is readily generalizable to other languages, even those without an ASR-system.",
"This is possible because of two reasons -",
"a) it obtains speech features as Mel-Frequency Cepstral Coefficients and does not require ASR-based text-conversion or speech feature extraction from a language-specific pretrained network, and",
"b) for knowledge acquisition, it does not require the entity label to be in the same language as the question, instead leveraging neuro-symbolic entity representations in the form of KG embeddings.",
"These KG embedding methods, trained to remedy KG sparsity by performing missing-edge prediction, learn transferable entity-features that encode the local and global structures in KGs.",
"This also permits the architecture to use an image representation technique called Image-as-Knowledge' (IaK).",
"This uses a co-attention mechanism that attends to important entities in the image and time-steps in a question, thus allowing for improved answer retrieval.",
"The IaK technique was first presented by (Ramnath and Hasegawa-Johnson, 2020) for the goal of performing FVQA over incomplete KGs, but is applied to a speech signal as opposed to a textual question.",
"We revisit its important details below in the relevant sections.",
"We report experimental results on synthetic speech data in the aforementioned diverse languages to demonstrate its effectiveness.",
"Hindi and Turkish are simulated as under-resourced languages by denying the system access to any text, ASR, or machine translation to or from those languages, thereby requiring the system to learn the mapping from Hindi and Turkish speech signals to the KG knowledge stored in English.",
"Through this work, we hope to motivate research in expanding spoken language understanding (SLU) in under-resourced languages through models which circumvent the need for parallel text labelled resources.",
"Spoken language understanding (SLU) has a long history.",
"It is well established in speech literature that using speech audio features in an end-to-end fashion for Language Understanding tasks is nontrivial compared to text.",
"There are several diffi-culties in using speech directly as input such as long length of inputs making it difficult to densely capture context, presence of spoken accents, gender, environmental noise, and acoustic information, etc. which all pose challenges for use in end-to-end semantic reasoning on it.",
"For most of its history, SLU was developed in a pipelined fashion, with ASR feeding text to a natural language understanding system, e.g., to the best of our knowledge, the only published uses of SLU with knowledge graphs that fit this description is (Woods, 1975).",
"Recent research in end-to-end multimodal SLU bypasses the need for ASR by leveraging a parallel modality such as image (Har-wath et al., 2016; Kamper et al., 2019) or video (Sanabria et al., 2018), or a non-parallel corpus of text (Sar et al., 2020), to guide learning speech embeddings such that the speech input can be used in a downstream task.",
"In speech-based VQA applications, the most common approach is a two-step approach which consists of an ASR followed by text-based VQA (Zhang et al., 2017).",
"However, these systems are not generalizable to under-resourced or unwritten languages for which we cannot train an ASR system.",
"Therefore, in this study, we will explore using neural speech embeddings, which are guided by the information in the KG, for achieving FVSQA.",
"Knowledge graphs (Suchanek et al., 2007; Auer et al., 2007; Bollacker et al., 2008) are effective ways of representing objects or concepts and their inter-relationships.",
"Such relational representations are formally defined in the Resource Description Framework (RDF) as triples f = ( subject, predicate, object ) , where ( subject, object ) are entities, predicate is the relation connecting the two entities.",
"(Halford et al., 2010) showed that such linked representations correlate highly with human cognition.",
"Furthermore, KGs can be classified as Closed-World or Open-World.",
"The former assumes that non-existent fact triples must necessarily be false, while the latter assumes that the KG could be incomplete, and therefore missing edges could be either true or false.",
"While closed-world assumptions hold for domain-specific KGs, common-sense KGs extracted from web-scale datasets do not respect this assumption (Galrraga et al., 2013; Dong et al., 2014).",
"Common-sense KGs extracted from web-scale datasets are usually incomplete.",
"KG embedding techniques (Bordes et al., 2013; Sun et al., 2019; Socher et al., 2013; Nickel et al., 2011; Dong et al., 2014; Dettmers et al., 2018) have been studied as a means to remedy incompleteness of large-scale KGs.",
"These embeddings have been shown to transfer well to other tasks that require knowledge acquisition over the KGs.",
"KG Embedding methods usually assign scores or truth-probabilities to each fact triple by learning latent features for entities and relationships.",
"These methods learn a score mapping ( h, r, t ) : E R E R where E is the set of all entities, R is the set of all relation-types.",
"h, t E are the head (subject) and tail (object), r R is the directed relationship that connects the two.",
"The observed KG can be expressed as G E R E , which in turn is a subset of G o , the unknown set of all true edges in the world that the KG seeks to represent.",
"The embeddings ( h, r, t ) are learned so that the score ( . ) is high for edges not just in G but also for those in G o , and low for edges outside of it.",
"Distance-based models (Bordes et al., 2013; Sun et al., 2019; Trouillon et al., 2016; Bordes et al., 2011) learn embeddings h , r and t in order to minimize the distance between t and f ( h, r ) , for some projection function f ( ) .",
"Common-sense KGs are often based on free text, therefore most entities occur rarely; an example is the entity lying on in Fig. 2.",
"Since it is very challenging for distance-based methods to perform completion of commonsense KGs, very few previous benchmarks have approached this task (Li et al., 2016; Malaviya et al., 2020).",
"In (Ramnath and Hasegawa-Johnson, 2020), it was shown that Entity-Relation Multi-Layer Per-ceptron (ERMLP) (Dong et al., 2014), which uses an MLP to produce the score ( h, r, t ) for each fact triple, works better for FVQA in comparison to TransE and RotatE.",
"Knowledge-graph question answering (KGQA) is the task of answering questions regarding facts",
"4 Task Formulation This section introduces a new task called FVSQA and presents a new dataset collected for this task.",
"that can be inferred/retrieved from a KG given the question, image and the graph.",
"Language-only benchmarks include (Bordes et al., 2015; Berant et al., 2013), vision-and-language benchmarks include (Sanket Shah and Talukdar, 2019; Marino et al., 2019; Wang et al., 2018).",
"In (Wang et al., 2018), FVQA is approached as a parsing and fact-retrieval problem, while (Narasimhan and Schwing, 2018) directly retrieves facts using lexical-semantic word embeddings.",
"In Out-of-the-box (OOB) reasoning (Narasimhan et al., 2018), a Graph Convolutional Network (Kipf and Welling, 2017) is used to reason about the correct entity, while (Zhu et al., 2020) (the current State-of-the-Art in the complete-KG FVQA task) added a visual scene-graph (Krishna et al., 2016) and a semantic graph based on the question alongside the (OOB) KG reasoning module.",
"In (Ramnath and Hasegawa-Johnson, 2020), FVQA is tackled on incomplete KGs using KG embeddings to represent entities instead of word-embeddings, as the latter are shown to be inadequate for this task.",
"Among other KGQA works closely related to our approach, (Huang et al., 2019) answer a text question using minimum-distance retrieval of translational KG entity and relation embeddings, thereby achieving SOTA results on SimpleQuestions with supporting knowledge bases Freebase2M and Free-base5M (Bollacker et al., 2008).",
"In (Lukovnikov et al., 2017), authors use character-level embeddings for SimpleQuestions.",
"In (Saxena et al., 2020), KG Embedding-based reasoning over missing edges is performed on the text-only benchmarks Webquestions (Berant et al., 2013) and MetaQA (Zhang et al., 2018), where they also perform multi-hop reasoning.",
"Amongst KGQA baselines involving the visual modality, the OKVQA benchmark (Marino et al., 2019) provides outside common-sense knowledge in the form of supporting text.",
"The accompanying external knowledge is acquired using a neural network parse of the fact text.",
"KVQA (Sanket Shah and Talukdar, 2019) provided KGs as outside knowledge, and they tackled the task using face-recognition and entity-linking to answer several different types of questions.",
"FVSQA is similar to FVQA in all aspects but for the modality of the question q ; in FVSQA it is speech input instead of a text input.",
"The following condition holds for questions in the FVQA (Wang et al., 2018) benchmark: for each (question,image,answer) triplet in the dataset ( ( q i , I i , y i ) D ), exactly one supporting fact in the knowledge graph ( f j = ( h, r, t ) G ) exists such that the correct answer y i is either the head or the tail of f j , and such that at least one of the two entities is visible in the image.",
"The companion knowledge-graph is constructed from three diverse sources: ConceptNet (Liu and Singh, 2004), Webchild (Tandon et al., 2014), and DBPedia (Auer et al., 2007).",
"ConceptNet provides common-sense knowledge about entities, DBPedia mainly conveys hypernym (i.e. parent-child) relationships, while Webchild covers many different kinds of comparative relationships between entities (these are considered as a single relationship-type for FVQA).",
"Answering questions in FVQA is to perform the following operation y = argmax e E p ( y = e | q, I, G ) , (1) i.e., retrieving that entity which is most likely to be Knowledge Base Total facts Questions DBPedia 35152 817 ConceptNet 119721 4652 Webchild 38576 357 Table 1: Distribution of facts and questions across the KBs (Wang et al., 2018) the correct answer given a question q and image I , and given the graph G .",
"The FVSQA task formulation is identical, except that the question is not textual but spoken.",
"We study the task when the question is spoken in one of three languages English, Hindi, Turkish.",
"The dataset contains 2190 images sampled from the ILSVRC (Russakovsky et al., 2015) and the MSCOCO (Lin et al., 2014) datasets.",
"5826 questions were obtained via crowdsourcing on Amazon Mechanical Turk which concern 4216 unique supporting facts (Table 1).",
"FVSQA provides the same five train-test splits as FVQA, where each split contains images and questions roughly in the ratio 1:1.",
"The accompanying KG consists of roughly 194500 facts, about 88606 entities.",
"In total, the dataset contains 13 relations: R {Category, HasProp-erty, RelatedTo, AtLocation, IsA, HasA, CapableOf, Figure 3: A co-attention mechanism fuses the image and question representations.",
"UsedFor, Desires, PartOf, ReceivesAction, Creat-edBy, Comparative} .",
"The text questions in FVSQA dataset are in English.",
"To generate spoken questions in Hindi and Turkish, we first translate the questions using Amazon Translate API 1 from English.",
"We manually review the questions to ensure intelligibility of questions.",
"These translated texts are only used for speech data generation; these are not available to the network during either training or inference.",
"We use Amazon's Polly API 2 to generate spoken questions for each language.",
"The generated speech is in mp3 format, sampled at 22 kHz.",
"For a given language, all questions were generated using the same voice.",
"The voices used were Joanna for English, Aditi for Hindi, and Filiz for Turkish.",
"We again manually review and ensure intelligibility of speech data so generated.",
"Fig. 2 depicts the architecture we use for FVSQA.",
"As shown in the figure, co-attention fuses an image I and question q to form a query vector .",
"This query vector is then used to retrieve the answer from the KG as y ( q | I ) = argmax e E ( q, I ) T e.",
"The following sections address representations of the question, KG, and image, the information fusion function ( q, I ) , and the loss function.",
"The image and KG representations are identical to those considered in (Ramnath and Hasegawa-Johnson, 2020), however, their goal is different from ours, as they perform monolingual text-FVQA over incomplete KGs.",
"We represent the speech waveforms using Mel-Frequency Cepstral Coefficient features.",
"We set the window-length to 25 ms and stride-size of 10 ms. For each time-step, we follow standard convention of using 39-dimensional vectors the first 12 cepstral coefficients and the energy term, along with delta and double-delta features to gather contextual information as well.",
"To discriminate between a true and false fact, a binary classification-based KG Embedding model is used.",
"Training a meaningful classifier would require presenting it with both positive and negative examples, but the observed KG G has only positive samples.",
"This leads us to a chicken and egg' problem KG Embeddings are supposed to mitigate the very problem of incompleteness, yet they need some negative edges to actually learn a good score function.",
"Some heuristics have been empirically found to work well in overcoming this problem.",
"Under the Locally Closed World Assumption (LCWA) (Dong et al., 2014), negative samples can be generated by randomly corrupting the tail entity of existing facts.",
"The KG embedding loss function penalizes the network when a true edge has a low truth-probability, and a false edge has a high truth-probability.",
"But some false facts may be more difficult for the model to classify as false than the others.",
"(Sun et al., 2019) introduced a self-adversarial negative sampling strategy so that the loss function reflects this, and each false fact's contribution to the loss is scaled by the truth-probability assigned by the network during training.",
"Thus, false edges with a higher truth-probability are penalized more heavily than false edges with lower truth-probabilities.",
"Based on each true fact f i , a total of n adversarial facts are generated and used to train discriminative embeddings using noise contrastive estimation (Gutmann and Hyvrinen, 2010).",
"Thus the knowledge graph embedding loss LKGE in-Module No.",
"cludes the arithmetic inverse of the sum of the log probability that each observed edge is true ( ln ( ( f i )) ), plus the expected log probability that the adversarial edges are false (cid:16) ln ( ( f (cid:48) j )) = ln (cid:16) 1 ( ( f (cid:48) j )) (cid:17)(cid:17) :",
"where expectation is with respect to the probability p i ( f (cid:48) j ) .",
"This probability is tuned using a temperature hyperparameter as p i ( f (cid:48) j ) = exp( ( f (cid:48) j )) n (cid:80) k =1 exp( ( f (cid:48) k )) .",
"Eq.",
"(3) is used to train embeddings of the head ( h ) and tail ( t ), which are applied to the FVSQA task as described in the next several subsections.",
"Eq.",
"(3) also trains relation embeddings ( r ) and MLP weights for the ERMLP scoring function ( w MLP ); these quantities are not used for the downstream FVSQA task.",
"We revisit the IaK representation first described by (Ramnath and Hasegawa-Johnson, 2020).",
"For the FVQA task, (Narasimhan and Schwing, 2018) established the importance of representing images as a bag-of-visual concepts instead of using features from pretrained networks.",
"This is a simple one-hot encoding of all object and scene detections found in the image.",
"IaK instead represents each image as a contextually-weighted sum of KG entity vectors of detected visual concepts.",
"(Ramnath and Hasegawa-Johnson, 2020) showed its superior performance for text-FVQA.",
"Detecting Objects: We use Torchvision's COCO object-detector to detect the 80 COCO (Lin et al., 2014) object classes.",
"The detector used was a Faster RCNN network (Ren et al., 2015) with a ResNet50 backbone (He et al., 2016), and feature pyramid network (Lin et al., 2017).",
"Another detector (ZFTurbo, 2018) trained on OpenImages 600 classes detections was used; we then retain only those classes which are present in ImageNet 200 object detection classes as well as in (Wu et al., 2016).",
"The overlap obtained is almost exact; fewer than 10 classes were not found.",
"Detecting Scenes: A WideResNet (Zagoruyko and Komodakis, 2016) detector trained on the MIT365 places dataset (Zhou et al., 2017) detects the scenes depicted in each image.",
"Only those classes which were used for constructing the FVQA KG (i.e. the 205 classes from MIT205 places dataset) are retained.",
"Upon detecting objects and scenes in each image, their corresponding entity KG embeddings are retrieved from KG.",
"IaK then represents each image as a concatenation of entity embedding vectors.",
"As shown in Fig. 3, a co-attention mechanism fuses the image and question representations.",
"To compute a contextual-query for the image-attention, we first obtain a self-attention weighted question representation A ( q i ) as: A ( q i ) = | q i | (cid:88) t =1 tq q ti , tq = exp( w T q q ti ) | q i | (cid:80) t =1 exp( w T q q ti ) , (5) where tq , w q are respectively the attention paid to time-step w t , and the weight parameters of the attention network used to compute the attention-scores.",
"where jI , w I , e ji are respectively the attention paid to concept j in the image, the weight parameters of the image-attention network, and the j th constituent concept of the image.",
"A ( I i ) represents a mapping: RN e m RN e , which is the attention-weighted convex combination of its inputs, thus A ( I i ) is a vector drawn from the span of the entities present in the image.",
"A ( q i ) represents a mapping: R 39 T R 39 , T being the length of the spoken question signal.",
"where h ( ) is a two-layer fully-connected network with ReLU activation functions.",
"As prescribed in STTF (Narasimhan and Schwing, 2018), late fusion is used wherein both the question and image vectors are separately passed through one fully-connected layer before being concatenated.",
"The loss function in Eq.",
"8 mirrors the answer prediction mechanism, in that the network is penalized whenever the cosine-similarity between the produced query and ground-truth answer deviates from 1.",
"Apart from the MFCC feature generation, the rest of the experimental setup is similar to that described in Seeing-is-Knowing (Ramnath and Hasegawa-Johnson, 2020).",
"It is briefly recapped in the sections below.",
"For training KG Embeddings, the entire KG is split as 80% training set and 20% test set.",
"The embedding dimensions for both entity and relation embeddings are N e = N r = 300 .",
"The batch size used is 1000.",
"ERMLP is trained for 25,000 epochs.",
"Adam optimizer is used for which the learning rate was initialized as 0.01 and then it is scaled down by a factor of 0.1 after every 10,000 epochs.",
"The hyper-parameter search for the learning rate was performed by choosing among values in the set {0.0001, 0.001, 0.01, 0.1}.",
"The temperature hyperparameter for the self-adversarial probability parameterization is set to 1 for all experiments.",
"The number of adversarial samples n generated for each positive sample is 16 .",
"ERMLP is parameterized as a three-layer neural network.",
"The size of the first layer is 3 N e since it takes the concatenated head, relation, and tail embeddings as input.",
"Subsequent layers are 2 N e and N e in size respectively, which are finally capped by a single sigmoid unit to output the truth probability ( h, r, t ) .",
"The activation functions used by the hidden layers are the Rectified Linear Unit (ReLU), which outputs max { 0 , x } for an input x .",
"All layers are fully connected and none of them use dropout.",
"The KG Embeddings accuracy is measured using the standard metrics: Hits @1, Hits @3, Hits @10.",
"These determine how often each correct tail/head gets ranked in the top 1, 3, or 10 ranked facts for Question 1: Which object is used for banging out rhythms in this image?",
"each ground-truth ( h, r ) / ( r, t ) pair.",
"Mean Rank is a metric often used to gauge the performance of KG Embeddings.",
"It measures the mean rank of each true fact f i := ( h, r, t ) in the dataset when ranked by its truth-probability for a given ( h, r ) pair.",
"An allied metric is the Mean Reciprocal Rank = 1 |D| i 1 R i .",
"A maximum of m = 14 visual concepts are detected in each image.",
"We report Hits @1 and Hits @3 for each model.",
"All the results are based on performing K-fold cross validation across the five train-test splits; the numbers reported are mean and standard deviation.",
"To train the fusion function , the optimizer used is Stochastic Gradient Descent with a batch size of 64.",
"The training runs for 100 epochs with a learning rate of 0.01 and a weight de-cay of 1e-3.",
"Fully-connected layers use a dropout probability of 0.3.",
"provided by Google Colab.",
"The training for the ERMLP takes approximately 3 hours, while training ( q, I ) on one train split takes roughly 2 hours.",
"Aided by ERMLP, WoW is able to perform FVSQA at the same levels of accuracy across English, Hindi, and Turkish.",
"FVSQA is trained using the best performing KG embedding model demonstrated in (Ramnath and Hasegawa-Johnson, 2020) and its performance is highlighted in Table 3.",
"To verify the superiority of ERMLP over word-embeddings, we compare a model trained with KG entities represented as averaged word embeddings instead.",
"This representation fails to train an end-to-end system even for English, the final accuracy being close to 0%.",
"For English, we additionally investigate an ASR + Text-based system, where the FVQA model is trained on gold-standard textual questions, and during inference-time, an ASR-converted speech transcript of the question is provided.",
"The ASR system is based on the pre-trained Kaldi ASpIRE model 3 which was originally trained on augmented Fisher English dataset.",
"The resulting FVQA system performs better than an end-to-end system for English.",
"This indicates some joint-training strategies for speech and text-based systems could help increase accuracy for the end-to-end speech system.",
"However, our experiments on sharing the lower layers of the network between speech and text-systems did not improve accuracy of the end-to-end speech system for English.",
"We can see in Q.1, Fig. 4 that for each language, the speech signal can perform as a good query vector to calculate contextual visual attention as per",
"Eq.(5).",
"The resulting IaK attention maps are interpretable, and in cases where the network predicts the wrong answer, provide an insight into the reason for the network's failure as in Q.2.",
"Furthermore, the speech self-attention maps are also coherent and informative.",
"The alignment of time-steps in the speech signal with boundaries is generated alongside the question generation.",
"This information, however, is not used while training the network, and is only used to investigate the attention mechanism.",
"Fig. 4 also shows attention accumulated by each word over all time-steps of the word's utterance.",
"We can clearly see that the relevant time-steps are attended to, depending on the image and the question itself.",
"To the best of our knowledge, this is the first work to jointly learn attention-based speech representations guided by external KG knowledge.",
"A new task FVSQA is presented in this work, along with an architecture that can perform cross-lingual knowledge acquisition for question-answering.",
"In the process, we demonstrate the first task to perform knowledge acquisition directly using a speech signal as an input.",
"This knowledge acquisition for speech can be extended to other tasks such as audio caption-based scene identification (Harwath et al., 2016) and multi-modal word discovery (Harwath et al., 2018).",
"Future work will include extending FVSQA to a multi-speaker setting, gathering spo-3 https://kaldi-asr.org/models/m1 Figure 5: Example of a fact-based visual question Question Which animal in this image is man's best friend?",
"Supporting fact [[dogs]] are [[man's best friend]] Subject, Predicate, Object (Dog, HasProperty, man's best friend) Answer Dog ken data from real-world speakers, as well as extending it to languages without an ASR system.",
"We now turn to discuss the ethical implications of this work.",
"Worldly-Wise relies on leveraging cross-lingual knowledge resources for question answering.",
"While this approach yields enormous benefits, care must be taken to evaluate appropriateness of the source of knowledge depending on the language.",
"What may be considered as conventional wisdom in one culture or language may not be true for another.",
"An example of how this manifests in our dataset is shown in Fig. 5.",
"The knowledge graph conveys conventional wisdom in English that A dog is man's best friend', and therefore the expected answer to this question is Dog'.",
"However, in regions where Hindi is spoken, the answer could equally be expected to be 'Cow' that appears in the image.",
"This example is quite informative, and if such an instance can occur in the extreme, it could lead to fairness issues.",
"This highlights the fundamental tradeoff involved in training such a cross-lingual system on knowledge generated in another language.",
"Governance of such a system is therefore essential to ensure cultural appropriateness and fairness in different contexts."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Text style transfer aims to alter the style (e.g., sentiment) of a sentence while preserving its content.",
"A common approach is to map a given sentence to content representation that is free of style, and the content representation is fed to a decoder with a target style.",
"Previous methods in filtering style completely remove tokens with style at the token level, which incurs the loss of content information.",
"In this paper, we propose to enhance content preservation by implicitly removing the style information of each token with reverse attention, and thereby retain the content.",
"Furthermore, we fuse content information when building the target style representation, making it dynamic with respect to the content.",
"Our method creates not only style-independent content representation, but also content-dependent style representation in transferring style.",
"Empirical results show that our method outperforms the state-of-the-art baselines by a large margin in terms of content preservation.",
"In addition, it is also competitive in terms of style transfer accuracy and fluency.",
"Style transfer is a popular task in computer vision and natural language processing.",
"It aims to convert an input with a certain style (e.g., sentiment, formality) into a different style while preserving the original content.",
"One mainstream approach is to separate style from content, and to generate a transferred sentence conditioned on the content information and a target style.",
"Recently, several models (Li et al., 2018; Xu et al., 2018; Wu et al., 2019) have proposed removing style information at the token level by filtering out tokens with style information, which are identified using either attention-based methods (Bahdanau et al., 2015) or frequency-ratio based methods (Wu et al., 2019).",
"This line of work is built upon the assumption that style is localized to to our knowledge , this is the best deal in phoenix .",
"certain tokens in a sentence, and a token has either content or style information, but not both .",
"Thus by utilizing a style marking module, the models filter out the style tokens entirely when constructing a style-independent content representation of the input sentence.",
"The drawback with the filtering method is that one needs to manually set a threshold to decide whether a token is stylistic or content-related.",
"Previous studies address this issue by using the average attention score as a threshold (Li et al., 2018; Xu et al., 2018; Wu et al., 2019).",
"A major shortcoming of this approach is the incapability of handling flat attention distribution.",
"When the distribution is flat, in which similar attention scores are assigned to tokens, the style marking module would remove/mask out more tokens than necessary.",
"This incurs information loss in content as depicted in Figure 1.",
"In this paper, we propose a novel method for text style transfer.",
"A key idea is to exploit the fact that a token often posses both style and content information.",
"For example, the word delicious is a token with strong style information, but it also implies the subject is food.",
"Such words play a pivotal role in representing style (e.g., positive sentiment) as well as presenting a hint at the subject matter/content (e.g., food).",
"The complete removal of such tokens leads to the loss of content information.",
"For the sake of enhancing content preservation, we propose a method to implicitly remove style at the token level using reverse attention .",
"We utilize knowledge attained from attention networks (Bah-danau et al., 2015) to estimate style information of a token, and suppress such signal to take out style.",
"Attention mechanism is known to attend to interdependent representations given a query.",
"In style classification task, an attention score could be interpreted as to what extent a token has style attribute.",
"If we can identify which tokens reveal stylistic property and to what extent, it is then possible to take the negation and to approximate the amount of content attribute within a token.",
"In this paper, we call it reverse attention.",
"We utilize such score to suppress the stylistic attribute of tokens, fully capturing content property.",
"This paper further enhances content preservation by fusing content information in creating target style representation.",
"Despite of extensive efforts in creating content representation, the previous work has overlooked building content-dependent style representations.",
"The common approach is to project the target style onto an embedding space, and share the style embedding among the same style as an input to the decoder.",
"However, our work sheds light on building content-related style by utilizing conditional layer normalization (CLN).",
"This module of ours takes in content representations, and creates content-dependent style representation by shaping the content variable to fit in the distribution of target style.",
"This way, our style representation varies according to the content of the input sequence even with the same target style.",
"Our method is based on two techniques, Reverse Attention and Conditional Layer Normalization, thus we call it RACoLN.",
"In empirical evaluation, RACoLN achieves the state-of-the-art performance in terms of content preservation, outperforming the previous state-of-the-art by a large margin, and shows competency in style transfer accuracy and fluency.",
"The contributions are as follows: We introduce reverse attention as a way to suppress style information while preserving content information when building a content representation of an input.",
"Aside from building style-independent content representation, our approach utilizes conditional layer normalization to construct content-dependent style representation.",
"Our model achieves state-of-the-art performance in terms of content preservation, outperforming current state-of-the-art by more than 4 BLEU score on Yelp dataset, and shows competency in other metrics as well.",
"In recent years, text style transfer in unsupervised learning environment has been studied and explored extensively.",
"Text style transfer task views a sentence as being comprised of content and style.",
"Thus, there have been attempts to disentangle the components (Shen et al., 2017; Li et al., 2018; Xu et al., 2018; Wu et al., 2019).",
"Shen et al. (2017) map a sentence to a shared content space among styles to create style-independent content variable.",
"Some studies view style as localized feature of sentences.",
"Xu et al. (2018) propose to identify style tokens with attention mechanism, and filter out such tokens.",
"Frequency-based is proposed to enhance the filtering process (Wu et al., 2019).",
"This stream of work is similar to our work in that the objective is to take out style at the token level, but different since ours does not remove tokens completely.",
"Instead of disentangling content and style, other papers focus on revising an entangled representation of an input.",
"A few previous studies utilize a pre-trained classifier and edit entangled latent variable until it contains target style using the gradient-based optimization (Wang et al., 2019; Liu et al., 2020).",
"He et al. (2020) view each domain of data as a partially observable variable, and transfer sentence using amortized variational inference.",
"Dai et al. (2019) use the transformer architecture and rewrite style in the entangled representation at the decoder.",
"We consider this model as the strongest baseline model in terms of content preservation.",
"In the domain of computer vision, it is a prevalent practice to exploit variants of normalization to transfer style (Dumoulin et al., 2017; Ulyanov et al., 2016).",
"Dumoulin et al. (2017) proposed conditional instance normalization (CIN) in which each style is assigned with separate instance normalization parameter, in other words, a model learns separate gain and bias parameters of instance normalization for each style.",
"Our work differs in several ways.",
"Style transfer in image views style transfer as changing the texture of an image.",
"Therefore, Dumoulin et al. (2017) place CIN module following every convolution layer, painting with style-specific parameters on the content representation.",
"Therefore, the T h e f ood i s d e li c i ou s Pre-trained GRUGRUGRU style-independent content representation content-dependent style representation CLNT h e f ood i s d e li c i ou s T h e f ood i s b l a nd 1 3 4 2 Embedding Attention Reverse Attention Initial Hidden State Style Marker Module 1 Removing Style 2 Stylizer 3 4 Decoder Input x E ( x ) z x z s z x z s x s Stop Gradient E ( x ) Output Figure 2: Input x first passes style marker module for computing reverse attention.",
"network passes on entangled representation of an image.",
"Our work is different in that we disentangle content and style, thus we do not overwrite content with style-specific parameters.",
"In addition, we apply CLN only once before passing it to decoder.",
"Let D = { ( x i , s i ) Ni =1 } be a training corpus, where each x i is a sentence, and s i is its style label.",
"Our experiments were carried on a sentiment analysis task, where there are two style labels, namely pos-itive and negative.",
"The task is to learn from D a model x s = f ( x , s ) , with parameters , that takes an input sentence x and a target style s as inputs, and outputs a new sentence x s that is in the target style and retains the content information of x .",
"We conduct this task in an unsupervised environment in which ground truth sentence x s is not provided.",
"To achieve our goal, we employ a style classifier s = C ( x ) that takes a sentence x as input and returns its style label.",
"We pre-train such model on D and keep it frozen in the process of learning f .",
"Given the style classifier C ( x ) , our task becomes to learn a model x s = f ( x , s ) such that C ( x s ) = s .",
"As such, the task is conceptually similar to adversarial attack: The input x is from the style class s , and we want to modify it so that it will be classified into the target style class s .",
"The architecture of our model f is shown in Figure 2, which will some times referred to as the generator network.",
"It consists of an encoder, a stylizer and a decoder.",
"The encoder maps an input sequence x into a style-independent representation z x .",
"Particularly, the encoder has a style marker module that computes attention scores of input tokens, and it reverses them to estimate the content information.",
"The reversed attention scores are applied to the token embedding E ( x ) and the results E (cid:48) ( x ) are fed to bidirectional GRU to produce z x .",
"The stylizer takes a target style s and the content representation z x as inputs, and produces a content-related style representation z s .",
"Finally, the decoder takes the content representation z x and style representation z s as inputs, and generates a new sequence x s .",
"Let x = [ x 1 , x 2 , . . . , x T ] be a length T sequence of input with a style s .",
"The style marker module is pre-trained in order to calculate the amount of style information in each token in a given input.",
"We use one layer of bidirectional GRU with attention (Yang et al., 2016).",
"where h t is the hidden representation from the bidirectional GRU at time step t .",
"u is learnable parameters initialized with random weights, and denotes the temperature in softmax.",
"When pre-training the style marker module, we construct a sentence representation by taking the weighted sum of the token representations with the weights being the attention scores, and feed the context vector to a fully-connected layer.",
"The cross-entropy loss is used to learn the parameters of the style marker module.",
"The attention scores in the style marker indicate what tokens are important to style classification, and to what extent.",
"Those scores will be reversed in the next section to reveal the content information.",
"The fully-connected layer of the style marker module is no longer needed once the style marker module is trained.",
"It is hence removed.",
"Using attention score from the pre-trained style marker module, we propose to implicitly remove the style information in each token.",
"We negate the extent of style information in each token to estimate the extent of content information, namely reverse attention.",
"where t is an attention value from style marker module, and t is the corresponding reverse attention score.",
"We multiply the reverse attention scores to the embedding vectors of tokens.",
"Intuitively, this can be viewed as implicitly removing the stylistic attribute of tokens, suppressing the",
"to produce a content representation z x , which is the last hidden state of the bidirectional GRU.",
"By utilizing reverse attention, we map a sentence to style-independent content representation.",
"The goal of the stylizer is to create a content-related style representation.",
"We do this by applying conditional layer normalization on the content representation z x from encoder as input to this module.",
"Layer normalization requires the number of gain and bias parameters to match the size of input representation.",
"Therefore, mainly for the purpose of shrinking the size, we perform affine transformation on the content variable.",
"The representation is then fed to conditional layer normalization so that the representation falls into target style distribution in style space.",
"Specifically, z s = CLN ( z x ; s ) = s (cid:12) N ( z x ) + s (9) N ( z x ) = z x (10) where and are mean and standard deviation of input vector respectively, and s is target style.",
"Our model learns separate s (gain) and s (bias) parameters for different styles.",
"Normalization method is commonly used to change feature values in common scale, but known to implicitly keep the features.",
"Therefore, we argue that the normalized content feature values retain content information of the content variable.",
"By passing through conditional layer normalization module, the content latent vector is scaled and shifted with style-specific gain and bias parameter, falling into target style distribution.",
"Thus, unlike previous attempts in text style transfer, the style representation is dynamic respect to the content, being content-dependent embedding.",
"In order to block backpropagation signal related to style flowing into z x , we apply stop gradient on z x before feeding it to stylizer.",
"The decoder generates a sentence with the target style conditioned on content-related style representation and content representation.",
"We construct our decoder using one single layer of GRU.",
"As briefly discussed in Section 3.2, the outputs from our generator are further passed on for different loss functions.",
"However, sampling process or greedy decoding does not allow gradient to flow, because the methods are not differentiable.",
"Therefore, we use soft sampling to keep the gradient flow.",
"Specifically, when the gradient flow is required through the outputs, we take the product of probability distribution of each time step and the weight of embedding layer to project the outputs onto word embedding space.",
"We empirically found that soft sampling is more suitable in our environment than gumbel-softmax (Jang et al., 2017).",
"Due to the lack of parallel corpus, we cannot train generator network with maximum likelihood estimation on style transfer ability.",
"Therefore, this paper employs a pre-trained classifier C ( x ) to train our generator on transferring style.",
"Our classifier network has the same structure as style marker module with fully-connected layer appended, nonetheless, it is a separate model obtained from a different set of initial model parameters.",
"We use the cross-entropy loss for training: L pre = E ( x ,s ) D [log p C ( s | x s )] (12) We freeze the weights of this network after it has been fully trained.",
"As shown in Figure 3, our loss function consists of four parts: a self reconstruction loss L self , a cycle reconstruction loss L cycle , a content loss L content , and a style transfer loss L style .",
"Let ( x , s ) D be a training example.",
"If we ask our model to f ( x , s ) to transfer the input into its original style, i.e., s = s , we would expect it to reconstruct the input.",
"where z x is the content representation of the input x , z s is the representation of the style s , and p D is the conditional distribution over sequences defined by the decoder.",
"Suppose we first transfer a sequence x into another style s to get x s using soft sampling, and then transfer x s back to the original style s .",
"We would expect to reconstruct the input x .",
"Hence we have the following cycle construction loss: L cycle = E ( x ,s ) D [log p D ( x | z x s , z s )] (14) where z x s is the content representation of the transferred sequence x s .",
"In the aforementioned cycle reconstruction process, we obtain a content representation z x of the input x and a content representation z x s of the transferred sequence x s .",
"As the two transfer steps presumably involve only style but not content, the two content representations should be similar.",
"Hence we have the following content loss: L content = E ( x ,s ) D || z x z x s || 22 (15) 1 Strictly speaking, the quantity is not well-defined because there is no description of how the target style s is picked.",
"In our experiments, we use data with two styles.",
"So, the target style just means the other style.",
"To apply the method to problems with multiple styles, random sampling of different style should be added.",
"This remark applies also to the two loss terms to be introduced below.",
"L style = E ( x ,s ) D [log p C ( s | x s )] (16)",
"where p C is the conditional distribution over styles defined by the style classifier C ( x ) .",
"As mentioned in Section 3.5, x s was generated with soft sampling.",
"In summary, we balance the four loss functions to train our model.",
"where i is balancing parameter.",
"Our study uses Yelp review dataset (Li et al., 2018) which contains 266K positive and 177K negative reviews.",
"Test set contains a total of 1000 sentences, 500 positive and 500 negative, and human-annotated sentences are provided which are used in measuring content preservation.",
"Another dataset we test is IMDB movie review dataset (Dai et al., 2019).",
"This dataset is comprised of 17.9K positive and 18.8K negative reviews for training corpus, and 2K sentences are used for testing.",
"Style transfer accuracy (S-ACC) measures whether the generated sentences reveal target style property.",
"We have mentioned a style classifier before: C ( x ) which is used in the loss function.",
"To evaluate transfer accuracy, we train another style classifier C eval ( x ) .",
"It has the identical architecture as before and trained on the same data, except from a different set of initial model parameters.",
"We utilize such structure due to its superior performance compared to that of commonly used CNN-based classifier (Kim, 2014).",
"Our evaluation classifier achieves accuracy of 97.8% on Yelp and 98.9% on IMDB, which are higher than that of CNN-based.",
"A well-transferred sentence must maintain its content.",
"In this paper, content preservation was evaluated with two BLEU scores (Papineni et al., 2002), one between generated sentence and input sentence (self-BLEU), and the other with human-generated sentence (ref-BLEU).",
"With this metric, one can evaluate how a sentence maintains its content throughout inference.",
"A natural language generation task aims to output a sentence, which is not only task-specific, but also fluent.",
"This study measures perplexity (PPL) of generated sentences in order to measure fluency.",
"Following (Dai et al., 2019), we use 5-gram KenLM (Heafield, 2011) trained on the two training datasets.",
"A lower PPL score indicates a transferred sentence is more fluent.",
"Zhang et al. (2020) proposed BERT score which computes contextual similarity of two sentences.",
"Previous methods, such as BLEU score, compute n-gram matching score, while BERT score evaluates the contextual embedding of the tokens obtained from pre-trained BERT (Devlin et al., 2019).",
"This evaluation metric has been shown to correlate with human judgement, thus our paper includes BERT score between model generated output and the human reference sentences.",
"We report precision, recall, and F1 score.",
"In addition to automatic evaluation, we validate the generated outputs with human evaluation.",
"With each model, we randomly sample 150 outputs from each of the two datasets, total of 300 outputs per model.",
"Given the target style and the original sentence, the annotators are asked to evaluate the model generated sentence with a score range from 1 (Very Bad) to 5 (Very Good) on content preservation, style transfer accuracy, and fluency.",
"We report the average scores from the 4 hired annotators in Table 3.",
"In this paper, we set the embedding size to 128 dimension and hidden representation dimension of",
"encoder to 500.",
"The size of bias and gain parameters of conditional layer norm is 200, and the size of hidden representation for decoder is set to 700 to condition on both content and style representation.",
"Adam optimizer (Kingma and Ba, 2015) was used to update parameter with learning rate set to 0.0005.",
"For balancing parameters of total loss function, we set to 0.5 for 1 and 2 , and 1 for the rest.",
"We compare our model with the baseline models, and the automatic evaluation result is presented in Table 1.",
"Our model outperforms the baseline 2 https://github.com/shentianxiao/ language-style-transfer 3 https://github.com/asyml/texar/tree/ master/examples/text_style_transfer 4 https://github.com/fastnlp/ style-transformer 5 https://github.com/cindyxinyiwang/ deep-latent-sequence-model models in terms of content preservation on both of the datasets.",
"Especially, on Yelp dataset, our model achieves 59.4 self-BLEU score, surpassing the previous state-of-the-art model by more than 4 points.",
"Furthermore, our model also achieves the state-of-the-art result in content preservation on IMDB dataset, which is comprised of longer sequences than those of Yelp.",
"In terms of style transfer accuracy and fluency, our model is highly competitive.",
"Our model achieves the highest score in style transfer accuracy on both of the datasets (91.3 on Yelp and 83.1 on IMDB).",
"Additionally, our model shows the ability to produce fluent sentences as shown in the perplexity score.",
"In terms of the BERT scores, the proposed model performs the best, having the highest contextual similarity with the human reference among the style transfer models.",
"With the automatic evaluation result, we see a trend of trade-off.",
"Most of the baseline models are good at particular metric, but show room for improvement on other metrics.",
"For example, Deep Latent and Cross-Alignment constantly perform well in terms of perplexity, but their ability to transfer style and preserving content needs improvement.",
"Style Transformer achieves comparable performance across all evaluation metrics, but our model outperforms the model on every metric on both of the datasets.",
"Therefore, the result shows that our model is well-balanced but also strong in every aspect in text style transfer task.",
"As for the human evaluation, we observe that the result mainly conform with the automatic evaluation.",
"Our model received the highest score on the style and content evaluation metric on both of the datasets by a large margin compared to the other baselines.",
"Moreover, the fluency score is comparable with that of Deep Latent model, showing its competency in creating a fluent output.",
"Both automatic and human evaluation depict the strength of Table 4: Sample outputs generated by the baseline models and our approach on Yelp and IMDB dataset.",
"Original Input The plot is clumsy and has holes in it .",
"Cross-Alignment The worst film is one of the worst movies i 've ever seen .",
"ControlledGen The plot is top-notch and has one-liners in it .",
"Deep Latent The plot is tight and has found it in a very well done .",
"Style Transformer The plot is joys and has flynn in it .",
"RACoLN (Ours) The plot is incredible and has twists in it .",
"We visualize the test dataset of Yelp projected on content and style space using t-SNE in Figure",
"4. It is clearly observed that the content representations ( z x ) are spread across content space, showing that the representations are independent of style.",
"After the content representations go through the stylizer module, there is a clear distinction between different styles representations ( z s ) in style space.",
"This is in sharp contrast to the corresponding distributions of the style-independent content representations shown on the right of the figure.",
"The figure clearly depicts how style-specific parameters in the stylizer module shape the content representations to fall in the target style distribution.",
"This figure illustrates how our model successfully removes style at the encoder, and constructs content-related style at the stylizer module.",
"In order to validate the proposed modules, we conduct ablation study on Yelp dataset which is pre-Style",
"sented in Table",
"5. We observe a significant drop across all aspects without the reverse attention module.",
"In other case, where we remove the stylizer module and use style embedding as in the previous papers, the model loses the ability to retain content, drop of around 6 score on self-BLEU.",
"We find that the two core components are interdependent in successfully transferring style in text.",
"Lastly, as for the loss functions, incorporating L content brings a meaningful increase in content preservation.",
"6 5 Conclusion In this paper, we introduce a way to implicitly remove style at the token level using reverse attention, and fuse content information to style representation using conditional layer normalization.",
"With the two core components, our model is able to enhance content preservation while keeping the outputs fluent with target style.",
"Both automatic and human evaluation shows that our model has the best ability in preserving content and is strong in other metrics as well.",
"In the future, we plan to study problems with more than two styles and apply multiple attribute 6 Other loss functions were not included, since the loss functions have been extensively tested and explored in previous papers (Prabhumoye et al., 2018; Dai et al., 2019).",
"Research on this paper was supported by Hong Kong Research Grants Council under grant 16204920 and Tencent AI Lab Rhino-Bird Focused Research Program (No. GF202035).",
"A text style transfer model is a conditional generative model, in which the condition is the target style.",
"This makes a wide range of applications possible, since a style can be defined as any common feature in a corpus, such as formality, tense, sentiment, etc.",
"However, at the same time, due to its inherent functionality, a text style transfer model can pose potential harm when used with a malicious intention.",
"It can lead to a situation where one deliberately distorts a sentence for his or her own benefit.",
"To give an example in a political context, political stance can be viewed a style in political slant dataset (Voigt et al., 2018) as in (Prabhumoye et al., 2018).",
"If one intentionally changes the style (polit-ical stance) of a person with the proposed model structure, the generated output can be exploited to create fake news or misinformation.",
"One possible remedy for such potentially problematic situation is to employ fact checking system as a safety measure (Nadeem et al., 2019).",
"We are fully aware that fact checking is not the fundamental solution to the potential harm that text style transfer models possess.",
"Nevertheless, one can filter out misleading information using the system in certain domains (i.e., politics), lowering the level of the danger that can be otherwise posed by style transfer.",
"In conclusion, such problem is shared among conditional generative models in general, and future studies on how to mitigate this problem are in crucial need.",
"Our work validates the proposed model and the baseline models on human evaluation, in which manual work was involved.",
"Thus, we disclose the compensation level given to the hired annotators.",
"The average lengths of the two corpora tested are 10.3 words for Yelp and 15.5 words for IMDB.",
"In addition, the annotation was performed on sentence-level, in which the annotators were asked to score a model generated sentence.",
"Considering the length and the difficulty, the expected annotations per hour was 100 sentences.",
"The hourly pay was set to 100 Hong Kong dollars (HK$), which is higher than Hong Kong's statutory minimum wage.",
"The annotators evaluated 1,500 sentences in total (750 sentences per dataset), thus each annotator was compensated with the total amount of HK$1,500."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo's contextualized word vectors.",
"First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information and (3) ELMo unequally encodes gender information about male and female entities.",
"Then, we show that a state-of-the-art coreference system that depends on ELMo inherits its bias and demonstrates significant bias on the WinoBias probing corpus.",
"Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated.",
"Distributed representations of words in the form of word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and contextualized word embeddings (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018; McCann et al., 2017; Radford et al., 2019) have led to huge performance improvement on many NLP tasks.",
"However, several re-cent studies show that training word embeddings in large corpora could lead to encoding societal biases present in these human-produced data (Bolukbasi et al., 2016; Caliskan et al., 2017).",
"In this work, we extend these analyses to the ELMo contextualized word embeddings.",
"Our work provides a new intrinsic analysis of how ELMo represents gender in biased ways.",
"First, the corpus used for training ELMo has a significant gender skew: male entities are nearly three times more common than female entities, which leads to gender bias in the downloadable pre-trained contextualized embeddings.",
"Then, we apply principal component analysis (PCA) to show that after training on such biased corpora, there exists a low-dimensional subspace that captures much of the gender information in the contextualized embeddings.",
"Finally, we evaluate how faithfully ELMo preserves gender information in sentences by measuring how predictable gender is from ELMo representations of occupation words that co-occur with gender revealing pronouns.",
"Our results show that ELMo embeddings perform unequally on male and female pronouns: male entities can be predicted from occupation words 14% more accurately than female entities.",
"In addition, we examine how gender bias in ELMo propagates to the downstream applications.",
"Specifically, we evaluate a state-of-the-art coreference resolution system (Lee et al., 2018) that makes use of ELMo's contextual embeddings on WinoBias (Zhao et al., 2018a), a coreference diagnostic dataset that evaluates whether systems behave differently on decisions involving male and female entities of stereotyped or anti-stereotyped occupations.",
"We find that in the most challenging setting, the ELMo-based system has a disparity in accuracy between proand anti-stereotypical predictions, which is nearly 30% higher than a similar system based on GloVe (Lee et al., 2017).",
"Finally, we investigate approaches for mitigating the bias which propagates from the contextualized word embeddings to a coreference resolution system.",
"We explore two different strategies: (1) a training-time data augmentation technique (Zhao et al., 2018a), where we augment the corpus for training the coreference system with its gender-swapped variant (female entities are swapped to male entities and vice versa) and, afterwards, retrain the coreference system; and (2) a test-time embedding neutralization technique, where input contextualized word representations are averaged with word representations of a sentence with entities of the opposite gender.",
"Results show that test-time embedding neutralization is only partially effective, while data augmentation largely mitigates bias demonstrated on WinoBias by the coreference system.",
"Gender bias has been shown to affect several real-world applications relying on automatic language analysis, including online news (Ross and Carter, 2011), advertisements (Sweeney, 2013), abusive language detection (Park et al., 2018), machine translation (Font and Costa-juss`a, 2019; Vanmassen-hove et al., 2018), and web search (Kay et al., 2015).",
"In many cases, a model not only replicates bias in the training data but also amplifies it (Zhao et al., 2017).",
"For word representations, Bolukbasi et al. (2016) and Caliskan et al. (2017) show that word embeddings encode societal biases about gender roles and occupations, e.g. engineers are stereotypically men, and nurses are stereotypically women.",
"As a consequence, downstream applications that use these pretrained word embeddings also reflect this bias.",
"For example, Zhao et al. (2018a) and Rudinger et al. (2018) show that coreference resolution systems relying on word embeddings encode such occupational stereotypes.",
"In concurrent work, May et al. (2019) measure gender bias in sentence embeddings, but their evaluation is on the aggregation of word representations.",
"In contrast, we analyze bias in contextualized word representations and its effect on a downstream task.",
"To mitigate bias from word embeddings, Bolukbasi et al. (2016) propose a post-processing method to project out the bias subspace from the pre-trained embeddings.",
"Their method is shown to reduce the gender information from the embeddings of gender-neutral words, and, remarkably, maintains the same level of performance on different downstream NLP tasks.",
"Zhao et al. (2018b) further propose a training mechanism to separate gender information from other factors.",
"However, Gonen and Goldberg (2019) argue that entirely removing bias is difficult, if not impossible, and the gender bias information can be often recovered.",
"This paper investigates a natural follow-up question: What are effective bias mitigation techniques for contextualized embeddings?",
"In this section we describe three intrinsic analyses highlighting gender bias in trained ELMo contextual word embeddings (Peters et al., 2018).",
"We show that (1) training data for ELMo contains significantly more male entities compared to female entities, leading to gender bias in the pre-trained contextual word embeddings, (2) the geometry of trained ELMo embeddings systematically encodes gender information, and (3) ELMo propagates gender information about male and female entities unequally.",
"Table 1 lists the data analysis on the One Billion Word Benchmark (Chelba et al., 2013) corpus, the training corpus for ELMo.",
"We show counts for the number of occurrences of male pronouns ( he , his and him ) and female pronouns ( she and her ) in the corpus as well as the co-occurrence of occupation words with those pronouns.",
"We use the set of occupation words defined in the WinoBias corpus and their assignments as prototypically male or female (Zhao et al., 2018a).",
"The analysis shows that the Billion Word corpus contains a significant skew with respect to gender: (1) male pronouns occur three times more than female pronouns and (2) male pronouns co-occur more frequently with occupation words, irrespective of whether they are prototypically male or female.",
"Next, we analyze the gender subspace in ELMo.",
"We first sample 400 sentences with at least one gendered word (e.g., he or she) from the OntoNotes 5.0 dataset (Weischedel et al., 2012) and generate the corresponding gender-swapped variants (changing he to she and vice versa).",
"We then calculate the difference of ELMo embeddings between occupation words in corresponding sentences and conduct principal component analysis for all pairs of sentences.",
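A minimal sketch of this PCA step, assuming the contextual vectors of each occupation word (in a sentence and in its gender-swapped variant) have already been extracted with ELMo; `gender_subspace` is a hypothetical helper name, not code from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def gender_subspace(pairs):
    """pairs: list of (v_orig, v_swapped) contextual vectors of the same
    occupation word in a sentence and in its gender-swapped variant."""
    diffs = np.stack([v_orig - v_swapped for v_orig, v_swapped in pairs])
    pca = PCA(n_components=2)
    pca.fit(diffs)
    # pca.explained_variance_ratio_ shows how much of the variation in the
    # gender-swap differences the leading components capture.
    return pca
```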
"Figure 1 shows there are two principal components for gender in ELMo, in contrast to GloVe which only has one (Bolukbasi et al., 2016).",
"The two principal components in ELMo seem to represent the gender from the contextual information (Contextual Gender) as well as the gender embedded in the word itself (Occupational Gender).",
"To visualize the gender subspace, we pick a few sentence pairs from WinoBias (Zhao et al., 2018a).",
"Each sentence in the corpus contains one gendered pronoun and two occupation words, such as 'The developer corrected the secretary because she made a mistake', along with the same sentence with the opposite pronoun (he).",
"In Figure 1 on the right, we project the ELMo embeddings of occupation words that are co-referent with the pronoun (e.g. secretary in the above example) for when the pronoun is male (blue dots) and female (orange dots) on the two principal components from the PCA analysis.",
"Qualitatively, we can see that the first component separates male and female contexts, while the second component groups male-related words such as lawyer and developer and female-related words such as cashier and nurse.",
"To test how ELMo embeds gender information in contextualized word embeddings, we train a classifier to predict the gender of entities from occupation words in the same sentence.",
"We collect sentences containing gendered words (e.g., he/she, father/mother) and occupation words (e.g., doctor; we use the list collected in Zhao et al. (2018a)) from the OntoNotes 5.0 corpus (Weischedel et al., 2012), where we treat occupation words as mentions of an entity, and the gender of that entity is taken to be the gender of a co-referring gendered word, if one exists.",
"For example, in the sentence the engineer went back to her home, we take engineer to be a female mention.",
"Then we split all such instances into training and test sets, with 539 and 62 instances, respectively, and augment these sentences by swapping all the gendered words with words of the opposite gender such that the numbers of male and female entities are balanced.",
"We first test if ELMo embedding vectors carry gender information.",
"We train an SVM classifier with an RBF kernel to predict the gender of a mention (i.e., an occupation word) based on its ELMo embedding.",
"On development data, this classifier achieves 95.1% and 80.6% accuracy on sentences where the true gender was male and female respectively.",
"For both male and female contexts, the accuracy is much larger than 50%, demonstrating that ELMo does propagate gender information to other words.",
"However, male information is more than 14% more accurately represented in ELMo than female information, showing that ELMo propagates the information unequally for male and female entities.",
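A minimal sketch of this probing classifier, assuming precomputed ELMo mention vectors `X_*` and binary gender labels `y_*` (1 = male, 0 = female); the ν-SVC grid loosely mirrors the footnoted setup, and `probe_gender` is a hypothetical helper, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import NuSVC

def probe_gender(X_train, y_train, X_dev, y_dev):
    # Grid over nu in {0.1, ..., 0.9}; infeasible nu values are skipped by
    # the grid search with a warning.
    grid = {"nu": list(np.arange(0.1, 1.0, 0.1))}
    search = GridSearchCV(NuSVC(kernel="rbf"), grid, cv=5)
    search.fit(X_train, y_train)
    pred = search.predict(X_dev)
    for label, name in [(1, "male"), (0, "female")]:
        mask = y_dev == label
        print(f"{name} accuracy: {(pred[mask] == y_dev[mask]).mean():.3f}")
    return search.best_estimator_
```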
"In this section, we establish that coreference systems that depend on ELMo embeddings exhibit significant gender bias.",
"Then we evaluate two simple methods for removing the bias from the systems and show that the bias can largely be reduced.",
"We evaluate bias with respect to the WinoBias dataset (Zhao et al., 2018a), a benchmark of paired male and female coreference resolution examples following the Winograd format (Hirst, 1981; Rahman and Ng, 2012; Peng et al., 2015).",
"It contains two different subsets: pro-stereotype, where pronouns are associated with occupations predominantly associated with the gender of the pronoun, and anti-stereotype, where the opposite relation holds.",
"(We use the ν-SVC formulation and tune the hyperparameter ν (Chang and Lin, 2011) in the range [0.1, 1] with a step of 0.1.)",
"Each subset consists of two types of sentences: one that requires semantic understanding of the sentence to make coreference resolution (Semantics Only) and another that relies on syntactic cues (w/ Syntactic Cues).",
"Gender bias is measured by taking the difference of the performance on the pro- and anti-stereotypical subsets.",
"Previous work (Zhao et al., 2018a) evaluated systems based on GloVe embeddings, but here we evaluate a state-of-the-art system trained on the OntoNotes corpus with ELMo embeddings (Lee et al., 2018).",
"Next, we describe two methods for mitigating bias in ELMo for the purpose of coreference resolution: (1) a train-time data augmentation approach and (2) a test-time neutralization approach.",
"Data Augmentation Zhao et al. (2018a) propose a method to reduce gender bias in coreference resolution by augmenting the training corpus for this task.",
"Data augmentation is performed by replacing gender revealing entities in the OntoNotes dataset with words indicating the opposite gender and then training on the union of the original data and this swapped data.",
"In addition, they find it useful to also mitigate bias in supporting resources and therefore replace standard GloVe embeddings with bias mitigated word embeddings from Bolukbasi et al. (2016).",
"We evaluate the performance of both aspects of this approach.",
"Neutralization We also investigate an approach to mitigate bias induced by ELMo embeddings without retraining the coreference model.",
"Instead of augmenting training corpus by swapping gender words, we generate a gender-swapped version of the test instances.",
"We then apply ELMo to obtain contextualized word representations of the original and the gender-swapped sentences and use their average as the final representations.",
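A sketch of this neutralization step, assuming hypothetical helpers `embed_sentence` (returning a [T, d] matrix of contextual vectors) and `swap_gender_words`:

```python
def neutralized_representation(tokens, embed_sentence, swap_gender_words):
    """Average the contextual vectors of a sentence and its gender-swapped
    variant; both embeddings have shape [T, d] for the same token count T,
    since swapping replaces tokens one-for-one."""
    h_orig = embed_sentence(tokens)
    h_swap = embed_sentence(swap_gender_words(tokens))
    return 0.5 * (h_orig + h_swap)
```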
"ELMo Bias Transfers to Coreference Row 3 in Table 2 summarizes performance of the ELMo based coreference system on WinoBias.",
"While ELMo helps to boost the coreference resolution F1 score (OntoNotes) it also propagates bias to the task.",
"It exhibits large differences between pro- and anti-stereotyped sets (|Diff|) on both semantic and syntactic examples in WinoBias.",
"Bias Mitigation Rows 4-6 in Table 2 summarize the effectiveness of the two bias mitigation approaches we consider.",
"Data augmentation is largely effective at mitigating bias in the coreference resolution system with ELMo (reducing | Diff | to insignificant levels) but requires retraining the system.",
"Neutralization is less effective than augmentation and cannot fully remove gender bias on the Semantics Only portion of WinoBias, indicating it is effective only for simpler cases.",
"This observation is consistent with Gonen and Goldberg (2019), who show that entirely removing bias from an embedding is difficult and depends on the manner in which one measures the bias.",
"Like word embedding models, contextualized word embeddings inherit implicit gender bias.",
"We analyzed gender bias in ELMo, showing that the corpus it is trained on has significant gender skew and that ELMo is sensitive to gender, but unequally so for male and female entities.",
"We also showed this bias transfers to downstream tasks, such as coreference resolution, and explored two bias mitigation strategies:",
"1) data augmentation and",
"2) neutralizing embeddings, effectively eliminating the bias from ELMo in a state-of-the-art system.",
"With increasing adoption of contextualized embeddings to get better results on core NLP tasks, e.g. BERT (Devlin et al., 2018), we must be careful how such unsupervised methods perpetuate bias to downstream applications and our work forms the basis of evaluating and mitigating such bias.",
"This work was supported in part by National Science Foundation Grant IIS-1760523.",
"RC was supported by a Facebook Fellowship.",
"We also acknowledge partial support from the Institute of the Humanities and Global Cultures at the University of Virginia.",
"We thank all reviewers for their comments."
] | [
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"result",
"result",
"abstain",
"method",
"result",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other"
] |
[
"We investigate the task of mining relevant stocks given a topic of concern on emerging capital markets, for which there is lack of structural understanding.",
"Deep learning is leveraged to mine evidences from large scale textual data, which contain valuable market information.",
"In particular, distributed word similarities trained over large scale raw texts are taken as a basis of relevance measuring, and deep reinforcement learning is leveraged to learn a strategy of topic expansion, given a small amount of manually labeled data from financial analysts.",
"Results on two Chinese stock market datasets show that our method outperforms a strong baseline using information retrieval techniques.",
"Stock prices are affected by events.",
"For example, recent announcement of a state plan to build a new economic region, Xiong'an near Beijing, by the Chinese government has led to the rise of hundreds of stocks, which can directly or indirectly benefit from the plan.",
"As a second example, the winning of a lawsuit against IP (Intellectual Property) breach can strengthen investors' confidence in technological and entertainment companies.",
"We refer to the topics or themes of such events (e.g. Xiong'an and IP) as concepts and their relevant stocks as concept stocks .",
"Given a news event, it can be highly useful for investors to find a list of relevant concept stocks for making investment decisions.",
"For popular concepts, lists of relevant concept stocks can be found from analyst reports from financial websites.",
"On the other hand, concepts are dynamic and flexible.",
"In addition, insights can be relatively scarce for emerging capital markets, such as the Chinese market, which had been closed to foreign investments before 2015.",
"[Figure 1: Concept relatedness — Xiong'an, Baiyangdian, Anxin county, Rongcheng, Baoding city, Hebei province, East Hebei, Beijing-Tianjin-Hebei, Hebei government, Shijiazhuang city, Coordinated development of Beijing-Tianjin-Hebei, Economic district, Binhai new area.]",
"It is therefore a challenging research question how to automatically find potentially relevant stocks given a topic of interest, from a large market of several thousand equities.",
"Intuitively, evidences between concepts and stocks exist in text documents over the Internet.",
"For example, news articles report events and companies involved.",
"In addition, company filings such as annual/quarter reports contain factual knowledge about stocks, which can also be useful background information.",
"For example, knowing that a company invests heavily on research is useful for correlating the company with IP-protection laws.",
"Such evidence-mining process can involve multiple steps.",
"As shown in Figure 1, starting from concept, Xiong'an, one might learn that the new economical region is located in the Baiyangdian area, which is further located in Hebei province.",
"By further reading, one can infer that the new economic region is related with the coordinated development plan for the Beijing-Tianjin-Hebei region, and therefore benefit a wider range of stocks.",
"Based on the intuition above, we build a neural model for mining evidences for concept stock recommendation.",
"The basis of our model is distributed similarities between concepts and stocks, obtained from embeddings trained over large-scale raw documents.",
"Embedding similarities encode correlations from direct narrative evidence within context windows.",
"To further include multi-step evidence mining, we build an iterative model for concept expansion, augmenting a given concept by iteratively adding more relevant concepts from background documents.",
"As demonstrated in Figure 1, this process can be ambiguous, since there can be multiple directions for further reading given a set of concepts.",
"We leverage a small amount of manually labeled data, downloaded from financial analysis websites, for guiding evidence mining.",
"In particular, we take a reinforcement learning method, which regards the evidence mining process as a decision process.",
"The starting point is a given input concept, such as Xiong'an or Electronic Vehicle.",
"At each step, a decision is made to stop further reading, or to continue adding related concepts to the set of concepts being considered.",
"Existing concepts can also be removed from further consideration.",
"Documents that discuss each concept are used to support the decision.",
"After the process stops, relevant stocks to the set of concepts are recommended.",
"The decision process is guided using a neural network model structure, trained with a loss function over the quality of the finally recommended stocks.",
"Results on two Chinese datasets show that our method outperforms a strong ranking-based baseline, which utilizes only direct evidences.",
"Our method can be easily adapted for other markets given the availability of a small amount of training data.",
"Our code is released 1 .",
"Our work is related to information retrieval and query expansion, where a concept can be regarded as a query and relevant stocks can be regarded as retrieved results.",
"We rely on external evidence for correlating concepts and stocks.",
"Ranking is an important problem in information retrieval.",
"We focus on ranking using neural models here.",
"One line of work (Shen et al., 2014a,b) models queries and documents using convolutional neural network and ranks the documents pair-wise or list-wise.",
"Another related method (Cao et al., 2015) adopts recursive neural networks to rank sentences for multi-document summarization.",
"These methods require massive annotated data, which is expensive to obtain for concept stock recommendation.",
"One line of work (et al., 2008; Preston and Colman, 2000) utilizes a feedback-based relevance model to expand queries.",
"Another line applies language modeling to estimate conditional probabilities of concepts given a query, and expands the query with the most probable concepts (Bai et al., 2005; Carpineto and Romano, 2012).",
"Recently, word embeddings are adopted for query expansion (Kuzi et al., 2016; Diaz et al., 2016).",
"Our framework belongs to this line of work with a difference that we use reinforcement learning to dynamically expand queries instead of following handcrafted rules such as using k -nearest neighbors.",
"Reinforcement Learning : Our work aligns with existing work using reinforcement learning to collect evidences.",
"Narasimhan et al. (2016) utilize external evidence to improve information extraction.",
"While the work requires handcrafted features, our model uses dense embedding features.",
"Athukorala et al. (2016) devise an interactive search engine balancing exploration and exploitation.",
"Their work relies on user interaction to make decisions.",
"In contrast, our work does not rely on active feedback, which can be expensive to obtain under our settings.",
"Nogueira and Cho (2017) introduce a query reformulation system based on reinforcement learning that rewrites a long and complex query to maximize the number of relevant documents returned.",
"Differently, we do not assume complex queries and focus on recommending relevant stocks in our system.",
"Zhong et al. (2017) solve a different problem, i.e. translating natural language questions into corresponding SQL queries.",
"Our task is to find stocks relevant to a concept according to a variety of data sources, such as news, tweets and company files.",
"Formally, given a concept $c$, a set of $m$ stocks $\{o_i\}_{i=1}^{m}$ and $n$ data sources $\{S_i\}_{i=1}^{n}$, where each $S_i$ is a set of documents $\{D_{ij}\}_{j=1}^{|S_i|}$ and each $D_{ij}$ is a sequence of words $w_1, w_2 \ldots w_{|D_{ij}|}$, we assume the relevant stocks of the concept are revealed in the data sources (e.g., we discover PetroChina as a concept stock of 'petroleum' from the document 'PetroChina acquires Keppel's entire stake in Singapore Petroleum'), and the task is to automatically discover these relations and select a subset of stocks as $c$'s concept stocks based on the data sources $\{S_i\}_{i=1}^{n}$.",
"Motivated by the success of embedding-based models (Mikolov et al., 2013; Pennington et al., 2014) in capturing semantic regularities, we use embeddings to represent concepts, stocks and documents.",
"In particular, we adopt Chinese word segmentation (Yang et al., 2017) to obtain words from documents.",
"Doc2Vec (Le and Mikolov, 2014) is then used on the documents of each data source $S_i$ to obtain a local word embedding matrix $E^i$ and a local document embedding matrix $F^i$, where each column of $E^i$ ($F^i$) corresponds to a word (document) vector representation in $S_i$.",
"In particular, we use the embeddings $E^i_c$ and $E^i_o$ as the local concept representation of $c$ and the local stock representation of $o$ in data source $S_i$, respectively.",
"Furthermore, we obtain a global word embedding matrix $E$ by averaging the local embedding matrices $E^1 \ldots E^n$, where $E_c$ and $E_o$ are regarded as the global concept representation of $c$ and the global stock representation of $o$, respectively.",
"Inspired by Shen et al. (2014a; 2014b), our ranking baseline discriminatively projects the representations of concepts and evidences of stocks into a semantic space for measuring their relevances.",
"Mining Evidences : Formally, given a concept $c$ and a stock $o$, we consult the data sources, retrieving the set of documents $\{D^i_{c,o}\}$ most relevant to $(c, o)$ from each data source $S_i$ as evidence.",
"To obtain evidence, we use $c$'s local embedding $E^i_c$ and $o$'s local embedding $E^i_o$ for representing the stock-concept pair $(c, o)$.",
"Cosine similarities are calculated between $E^i_c + E^i_o$ and each column of $F^i$ for measuring the semantic relatedness of each document to $(c, o)$.",
"Supposing that the columns are normalized, the scores are calculated as: $\mathrm{score}(\{D_{ij}\}_{j=1}^{|S_i|}) = (F^i)^T (E^i_c + E^i_o)$ (1). The $q$ ($q$ is set to 5 empirically) documents $\{D^i_{c,o}\}$ with the maximum scores are selected as evidence from each $S_i$.",
"When $|F^i|$ is large, we use an approximate $k$-nearest-neighbor algorithm, namely Locality Sensitive Hashing (Datar et al., 2004), to improve efficiency.",
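Equation 1 amounts to a matrix-vector product followed by a top-q selection; a minimal NumPy sketch of the exact-search variant (an LSH index could replace the argsort when $|F^i|$ is large):

```python
import numpy as np

def top_q_evidence(F_i, E_c, E_o, q=5):
    """F_i: [d, |S_i|] document matrix with unit-normalized columns;
    E_c, E_o: [d] local concept/stock vectors. Returns indices of the
    q highest-scoring documents under Equation 1."""
    scores = F_i.T @ (E_c + E_o)
    return np.argsort(-scores)[:q]
```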
"Learning to Rank $o$ Given $c$ : The overall framework for measuring relevance is shown in Figure 2(a).",
"Given a concept $c$ and stock $o$, for each data source $S_i$, the local stock representation $E^i_o$ and the local document representations of the $q$ most relevant documents, denoted as $\{F^i_{c,o}\}$, are sequentially fed into a Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) to acquire a semantic representation of the evidence.",
"A bidirectional extension (Graves and Schmidhuber, 2005) is applied to capture semantics both left-to-right and right-to-left.",
"As a result, two sequences of hidden states are obtained, i.e. $\overrightarrow{h}_1, \overrightarrow{h}_2 \ldots \overrightarrow{h}_{q+1}$ and $\overleftarrow{h}_1, \overleftarrow{h}_2 \ldots \overleftarrow{h}_{q+1}$.",
"We concatenate $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ at each time step to obtain the final hidden states $h_1, h_2 \ldots h_{q+1}$.",
"We concatenate all $I^i_{c,o}$ with the global concept representation of $c$ (i.e. the average of the local representations $E^1_c \ldots E^n_c$) and feed the result to a softmax layer to obtain the probability of a stock $o$ being $c$'s concept stock, denoted as $p(o|c)$.",
"Given a concept c , all stocks are ranked by the probabilities.",
"Given a set of gold-standard concept stock data, supervised learning is conducted to learn p ( o | c ) .",
"The loss function is defined as: $\mathbb{E}_{(c,o,y)}[-\log p(o|c) \cdot y - \log(1 - p(o|c)) \cdot (1 - y)]$ (3). Here $y$ is 1 when $o$ is a concept stock of $c$, and 0 otherwise.",
"Equation 3 maximizes $p(o|c)$ ($1 - p(o|c)$) when $y = 1$ ($y = 0$).",
"AdaGrad (Duchi et al., 2011) is applied to update parameters.",
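A minimal sketch of one training step for this objective, where `model(c, o)` is a hypothetical stand-in for the Bi-LSTM scorer that outputs p(o|c):

```python
import torch

def training_step(model, optimizer, batch):
    c, o, y = batch                       # y in {0, 1}, as float tensors
    p = model(c, o)                       # p(o | c) from the softmax layer
    # Equation 3: binary cross-entropy over (concept, stock, label) triples.
    loss = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
```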
"The ranking baseline can require large amounts of annotated data to deliver satisfying performance (Shen et al., 2014a,b), which can be costly.",
"In addition, the algorithm has to deal with highly imbalanced datasets, since there are thousands of stocks in a stock market, but only a few are related to a concept c , which greatly harms the performance of discriminatively trained algorithms (Wu and Chang, 2003).",
"We take a different approach, utilizing the same data sources and representations as the ranking baseline.",
"To better leverage a small amount of supervision data, we apply reinforcement learning to expand the query concept $c$, consulting supporting evidence from the data sources $\{S_i\}_{i=1}^{n}$.",
"We leverage embedding similarities as a basis for concept-stock relatedness.",
"The advantage is that embeddings can be trained over large scale raw texts unsupervisedly, without the need for manually labeled stock lists.",
"Embeddings capture similarities between concepts and stocks when they co-occur within a context window during embedding training.",
"As a result, irrelevant (relevant) stocks are less (more) similar to the concept c , since they infrequently (frequently) co-occur, which alleviates the problem brought by imbalanced datasets in that irrelevant stocks can be spotted at ease.",
"The global representations of $c$ and $o$ are utilized to obtain a direct relevance score $f(c, o)$: $f(c, o) = E_c \cdot E_o$ (4), where $\cdot$ denotes the dot product operation.",
"While $f(c, o)$ measures direct relevance between $c$ and $o$ in embedding contexts, we want to find those $o'$ that are indirectly relevant to $c$ by reasoning, as shown in Figure 1. Query expansion (Kuzi et al., 2016; Diaz et al., 2016) is used to this end.",
"One naive baseline is expanding the concept $c$ with its $k$-nearest-neighbor concepts, denoted as $[c_e]$, from the global matrix $E$, measured by cosine similarity.",
"Relevance between the expanded concepts $[c, [c_e]]$ and $o$ is calculated as: $f([c, [c_e]], o) = E_{[c,[c_e]]} \cdot E_o = (E_c + \sum_{c_e \in [c_e]} E_{c_e}) \cdot E_o$ (5). We define $E_{[c,[c_e]]}$ as the sum of $E_c$ and each $E_{c_e}$.",
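A sketch of this naive expansion baseline (Equation 5), assuming a global matrix `E` with unit-normalized columns and hypothetical index arguments `c_idx` (the concept's column) and `stock_idx` (all stock columns):

```python
import numpy as np

def knn_expand_and_score(E, c_idx, stock_idx, k=8):
    sims = E.T @ E[:, c_idx]               # cosine similarity of every word to c
    neighbors = np.argsort(-sims)[1:k + 1] # k nearest concepts, skipping c itself
    e_expanded = E[:, c_idx] + E[:, neighbors].sum(axis=1)  # E_{[c,[c_e]]}
    return E[:, stock_idx].T @ e_expanded  # Equation 5 relevance per stock
```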
"The baseline is relatively inflexible since a fixed number $k$ of expansion concepts is selected for every $c$.",
"In contrast, the reasoning procedures shown in Figure 1 can take an arbitrary number of steps.",
"Besides, the naive baseline does not incorporate supervision, thus being unable to decide whether the selected concepts are beneficial for concept stock recommendation.",
"We use reinforcement learning to tackle this issue, directly learning how to expand queries from a few labeled cases.",
"Given $c$, our method works iteratively, expanding the concept until it judges that further expansion is not desirable.",
"For each candidate concept to expand c , a decision is made by the model on whether it will improve, worsen or have no effect on recommendation accuracies.",
"Based on these, we model query expansion with a Markov Decision Process (MDP) to discriminatively select expansion concepts for c to maximize recommendation accuracies, while requiring much less training data compared to the ranking baseline.",
"The overall framework is shown in Figure 2(b).",
"Formally, an MDP is a list $[Z, A, T, R]$, where $Z = \{z\}$ is a set of states, $A = \{a\}$ is a set of actions, $T(z, a)$ is a transition function, which determines the next state $z' = T(z, a)$ after performing action $a$ on $z$, and $R$ is a reward function.",
"We describe each in detail below. States : Each state $z$ is a list of lists: $z = [[c, [c_e]], [v_1, \{F^1_{context}\}] \ldots [v_n, \{F^n_{context}\}]]$ (6), where $[c, [c_e]]$ consists of the input concept $c$ and its expansion concept list $[c_e]$ so far.",
"In the start state, $[c_e]$ is empty, and thus $[c, [c_e]] = [c, [\,]]$.",
"$[v_i, \{F^i_{context}\}]$ consists of a new candidate concept $v_i$ and its supporting evidence $\{F^i_{context}\}$.",
"Since globally trained embeddings underperform locally trained embeddings for query expansion (Diaz et al., 2016), we use the local embeddings $E^i$ to suggest candidate concepts $v_i$, instead of using the global embedding $E$.",
"$v_i$ is obtained by finding the most similar concept to $[c, [c_e]]$ in the local embeddings $E^i$.",
"$\{F^i_{context}\}$ is the document representations of the $q$ most relevant documents to $([c, [c_e]], v_i)$, used as evidence.",
"Formally, $\{F^i_{context}\}$ is the document representations of the $q$ documents with the maximum scores, $\mathrm{score}'(\{D_{ij}\}_{j=1}^{|S_i|}) = (F^i)^T (E^i_{[c,[c_e]]} + E^i_{v_i})$.",
"As a result, at each state, we have n candidate concepts v 1 ...v n .",
"The neural agent chooses at most one concept to be added to [ c e ] based on the evidences.",
"Action : The agent can take four types of actions: (1) add one of the $n$ candidate concepts to $[c_e]$; (2) reject all $n$ candidate concepts; (3) remove the last added concept from $[c_e]$; (4) stop the process.",
"State Transition : After taking an action $a$ on a state $z$, a new state $z'$ is yielded by the transition function $T(z, a)$.",
"If one of the candidate concepts $v_i$ is chosen, $v_i$ is added to $[c_e]$.",
"In addition, the new state $z'$ is obtained by updating $[v_1, \{F^1_{context}\}] \ldots [v_n, \{F^n_{context}\}]$, finding the most similar concepts to the new $[c, [c_e]']$ and their supporting evidence in the local embeddings.",
"If action (2) is chosen, $[c, [c_e]]$ remains unchanged, while $v_1 \ldots v_n$ are replaced with the second most similar concepts to $[c, [c_e]]$ in the local embeddings, and the process repeats until action (1), (3) or (4) is chosen.",
"If (3) is chosen, the last added concept is removed from $[c_e]$, and $[v_1, \{F^1_{context}\}] \ldots [v_n, \{F^n_{context}\}]$ are updated according to the new $[c, [c_e]']$.",
"If (4) is chosen, the query expansion process finishes.",
"The final $[c, [c_e]]$ is the result of query expansion.",
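The transition logic can be sketched as follows; the state layout and the `refresh` callback (which recomputes the candidate concepts and their evidence from the local embeddings) are hypothetical simplifications of the paper's MDP:

```python
ADD, REJECT, REMOVE, STOP = range(4)

def transition(state, action, refresh, arg=None):
    """state = (c, expansion, candidates); refresh(c, expansion, offset=0)
    recomputes the candidate list [v_i, {F^i_context}]."""
    c, expansion, candidates = state
    if action == ADD:                      # (1) add candidate `arg` to [c_e]
        expansion = expansion + [candidates[arg]]
        return (c, expansion, refresh(c, expansion))
    if action == REJECT:                   # (2) move to next-best candidates
        return (c, expansion, refresh(c, expansion, offset=1))
    if action == REMOVE:                   # (3) drop the last added concept
        return (c, expansion[:-1], refresh(c, expansion[:-1]))
    return (c, expansion, None)            # (4) stop: [c, [c_e]] is final
```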
"Neural Agent : Given a state $z$, the neural agent chooses one action to take among the four types of actions.",
"To this end, $[c, [c_e]]$ and each $[v_i, \{F^i_{context}\}]$ are fed into separate Bi-LSTMs to obtain a concept representation and candidate concept representations, respectively.",
"We further concatenate these representations and use a linear layer to obtain Q-values $Q(z, a; \theta)$ for each action $a$ (Sutton and Barto, 1998).",
"Note that we do not use a softmax to normalize the Q-values, since Q-values are by definition expectations of discounted sums of rewards, not probabilities.",
"The action with the maximum Q-value is chosen.",
"Reward : A reward $r$ is assigned at each step by the reward function $R$, which evaluates the goodness of action $a$ on state $z$.",
"We use the difference of mean average precision (MAP) (Christopher et al., 2008) before and after an action $a$ as the reward function: $R(z, a, z') = \mathrm{MAP}(z') - \mathrm{MAP}(z)$ (7), where MAP is defined as: $\mathrm{MAP}(z) = \frac{1}{|\Omega(c)|} \sum_{o \in \Omega(c)} \mathrm{Precision@}\,rank(o; z, E)$ (8) and $\mathrm{Precision@}K = \frac{\sum_{o' \in \Omega(c)} \mathbf{1}(rank(o'; z, E) \le K)}{K}$ (9). $\Omega(c)$ is the set of concept stocks of the concept $c$ in the training data.",
"$rank(o; z, E)$ is the rank of the stock $o$, which is calculated by utilizing $[c, [c_e]]$ of $z$ and the global embedding $E$ to rank all stocks using Equation 5.",
"$\mathbf{1}$ is the indicator function.",
"Therefore, MAP measures the goodness of the ranking, which is large if the stocks in $\Omega(c)$ are ranked higher than the others.",
"The reward $r$ is positive if $[c, [c_e]']$ of $z'$ ranks stocks better than $[c, [c_e]]$ of $z$, and negative otherwise.",
"We choose MAP based on two reasons: (1) MAP provides a measure of quality, which has been shown to have good discrimination and stability.",
"Besides, MAP is roughly the average area under the precision-recall curve for a set of queries (Christopher et al., 2008).",
"Thus, optimizing MAP can indirectly improve both precision and recall.",
"(2) MAP provides smoother scores than other metrics such as Precision@K and Recall@K.",
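A sketch of this MAP-based reward (Equations 7-9); `rank_all` is a hypothetical function that ranks every stock for a state via Equation 5, and `gold` is the labeled concept-stock set for the concept:

```python
def mean_avg_precision(ranking, gold):
    hits, ap = 0, 0.0
    for k, stock in enumerate(ranking, start=1):
        if stock in gold:
            hits += 1
            ap += hits / k            # Precision@rank(o) for each gold stock
    return ap / max(len(gold), 1)     # Equation 8

def reward(state, next_state, rank_all, gold):
    # Equation 7: difference in MAP before and after the action.
    return (mean_avg_precision(rank_all(next_state), gold)
            - mean_avg_precision(rank_all(state), gold))
```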
"In summary, at each step, the MDP framework chooses an action $a$ based on a state $z$, obtaining a new state $z'$ and a reward $r$, which forms a sample $(z, a, z', r)$.",
"Algorithm 1 — Training Phase of MDP for Query Expansion: 1: Initialize experience memory $M$. 2: Initialize the action network with random weights $\theta$. 3: Initialize the target network with weights $\theta_{target}$. 4: for episode from 1 to $N$ do 5: for each concept $c$ do 6: Obtain start state $z \leftarrow \mathrm{get\_state}([c, [\,]], E^1 \ldots E^n)$. 7: while true do 8: if $\mathrm{random}() < \epsilon$ then 9: Select a random action $a$. 10: else 11: Send state $z$ to the neural agent. 12: Obtain action $a$ from the action network. 13: end if 14: Obtain new state $z' \leftarrow T(z, a)$. 15: Calculate reward $r \leftarrow R(z, a, z')$. 16: Store sample $(z, a, z', r)$ in $M$. 17: Update state $z \leftarrow z'$. 18: Sample a mini-batch $(z_t, a_t, z'_t, r_t)$ from $M$. 19: Calculate the sample estimate using Equation 11. 20: Perform a batch gradient descent step on the action network, updating parameters $\theta$ using Equation 12. 21: Update $\theta_{target}$ every $C$ steps.",
"The process repeats until action (4) is chosen.",
"We adopt Q-learning (Sutton and Barto, 1998) to optimize the neural agent, which uses a function $Q(z, a)$ to represent Q-values and the recursive Bellman equation to perform Q-value iteration when observing a new sample $(z, a, z', r)$.",
"Since the state space $Z$ can be extremely large in practice, we represent the Q-value function $Q(z, a)$ with the neural agent shown in Figure 2(b), named the action network and parametrized by $\theta$ (Mnih et al., 2015).",
"The deep Q-learning method has the ability to capture nonlinear features and achieve better performance compared with traditional methods (Narasimhan et al., 2015).",
"Formally, $Q(z, a) = Q(z, a; \theta)$ (10). To improve learning stability, sample reward estimates are obtained from a separate target network with the same architecture as the action network (Mnih et al., 2015), parametrized by $\theta_{target}$.",
"Formally, the sample reward estimate of $(z, a, z', r)$ is: $y' = r$ if $a$ is action (4), and $y' = r + \max_{a_{new} \in A} Q(z', a_{new}; \theta_{target})$ otherwise (11). Note that if action (4) is taken, $y' = r$, since the process stops at state $z$ and no further action will be taken, so the sum of rewards is $r$.",
"To learn the model parameters $\theta$, the action network's outputs $Q(z, a; \theta)$ should be close to the sample estimates obtained from the target network.",
"Thus, we introduce an experience memory M to save history samples and select a mini-batch of samples according to a uniform distribution.",
"We use the mean squared error as the loss function: $\mathbb{E}_{(z,a,z',r) \sim U(M)}[(Q(z, a; \theta) - y')^2]$ (12). The training phase is shown in Algorithm 1. In lines 8-12, we use $\epsilon$-greedy exploration, which encourages the agent to explore unknown regions of the state space (Sutton and Barto, 1998).",
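One gradient step of this Q-learning procedure might look as follows; `action_net` and `target_net` are hypothetical modules mapping a featurized state to a vector of Q-values, and the terminal flag marks action (4):

```python
import random
import torch

def dqn_step(action_net, target_net, optimizer, memory, batch_size=50):
    batch = random.sample(memory, min(batch_size, len(memory)))
    loss = 0.0
    for z, a, z_next, r, is_stop in batch:
        # Equation 11: bootstrapped estimate from the frozen target network.
        y = r if is_stop else r + target_net(z_next).max().item()
        loss = loss + (action_net(z)[a] - y) ** 2   # Equation 12 (MSE)
    loss = loss / len(batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```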
"We construct two datasets from the Chinese websites Jinrongjie (http://stock.jrj.com.cn/concept/) and Tonghuashun (http://stock.10jqka.com.cn/gngyw_list/), respectively, which are two mass media outlets for Chinese stock markets.",
"These two websites periodically publish their concept stock lists, which are manually collected and analyzed by their financial professionals.",
"We observe high correlations in quote changes among the stocks of each concept $c$, and their lists are commonly used by investors to select stocks, which supports the credibility of these datasets.",
"The Jinrongjie dataset consists of 206 concepts and each concept has an average of 25.4 manually suggested concept stocks.",
"For the Tonghuashun dataset, there are 900 concepts and 15.6 manually suggested concept stocks on average.",
"There are two main stock exchanges in China, the Shanghai Stock Exchange (http://www.sse.com.cn/assortment/stock/list/share/) and the Shenzhen Stock Exchange (http://www.szse.cn/).",
"We crawled stock lists from their official websites, with 3326 stocks in total.",
"We utilize four public data sources, $S_1$ to $S_4$, the statistics of which are shown in Table 1.",
"[Table 1: Data Source statistics (# Docs, Avg # Words) — News: 255,318 docs, 2,753 words; Report: 12,431 docs, 19,145 words; Wikipedia: 2,143 docs, 3,745 words; Search Engine: 6,130 docs, 1,846 words.]",
"$S_1$: News is crawled from Sina Finance News (http://finance.sina.com.cn/) and spans 2009 to 2017.",
"$S_2$: Reports consists of annual and quarterly company reports crawled from Sina Finance (http://finance.sina.com.cn/focus/ssgsnb2016/).",
"$S_3$: Wikipedia includes the relevant Wikipedia pages of the concepts and stocks, if any, which can provide background knowledge.",
"$S_4$: Search Engine includes open-domain information for the concepts and stocks obtained using the Bing API (https://azure.microsoft.com/en-us/services/cognitive-services/bing-web-search-api/).",
"We adopt search engine results for representing heterogeneous web texts.",
"The top-ranked webpages are crawled.",
"Given the Jinrongjie and Tonghuashun datasets, we randomly select 70%, 10% and 20% of the concepts as training, development and testing sets, respectively.",
"We compare our method with five baselines: Search is a naive information retrieval baseline, which sends the concept $c$ and each stock $o$ to an inverted index and obtains a list of top-$k$ ranked documents ($k = 5$ in experiments) by a fixed ranking metric, Okapi BM25 (Robertson et al., 2009).",
"The stocks are ranked by the average of the top-$k$ documents' BM25 scores.",
"Rank is our ranking baseline.",
"Five top-ranked documents from each source are fed into the model.",
"All 3326 stocks are ranked for each concept.",
"Semantics ranks the stocks using Equation 4, which is the naive semantic relatedness f ( c, o ) .",
"Semantics+ extends Semantics by including the 8 most similar words to expand the original concepts.",
"Semantics++ extends Semantics by including the most similar words with similarities larger than 0.65 to expand original concepts.",
"On average, 6.3 concepts are included.",
"[Table 2: Concept stock recommendation results on Jinrongjie and Tonghuashun (P@5, P@10, R@30, MAP). Jinrongjie — Search: 0.402, 0.315, 0.338, 0.296; Semantics: 0.45, 0.367, 0.380, 0.332; Semantics+: 0.471, 0.370, 0.391, 0.352; Semantics++: 0.478, 0.375, 0.396, 0.359; Rank: 0.467, 0.376, 0.402, 0.365; RL: 0.524*, 0.427*, 0.428*, 0.398*. Tonghuashun — Search: 0.387, 0.302, 0.315, 0.278; Semantics: 0.437, 0.347, 0.360, 0.327; Semantics+: 0.448, 0.356, 0.374, 0.345; Semantics++: 0.453, 0.362, 0.380, 0.351; Rank: 0.458, 0.373, 0.381, 0.356; RL: 0.507*, 0.402, 0.422*, 0.378*.]",
"Also, the experience memory size is set to 50,000 and older training samples are abandoned.",
"The $\epsilon$ value is set to 1 at the start and gradually decreases to 0.1 after 3000 annealing steps.",
"We perform a training phase after every 3 decision steps.",
"The mini-batch size is set to 50.",
"Dropout is applied to avoid overfitting and the dropout rate is 0.5.",
"We set the learning rate for AdaGrad to 0.01.",
"Gradient clipping (Pascanu et al., 2013) is adopted to prevent gradients from exploding or vanishing during the training process.",
"We use four metrics, mean average precision (MAP), precision at 5 and 10 ( P @5 , P @10 ) and recall at 30 ( R @30 ) to evaluate the algorithms.",
"The results are shown in Table 2. From Table 2, the first observation is that RL outperforms the baselines on both datasets, which demonstrates the effectiveness of combining semantic relatedness with query expansion based on reinforcement learning.",
"The baseline Rank achieves the second best results.",
"The large gap between RL and Rank indicates that RL is much easier to train compared to Rank on small data.",
"Second, we observe that Semantics+ improves over Semantics , which shows that query expansion has the potentials to alleviate concept ambiguities and benefit concept stock recommendation.",
"Semantics++ can outperform Semantics+ by considering semantic similarities.",
"[Figure 3: Learning curve — MAP against the number of training concepts for Rank and RL. Figure 4: Efficiency — running time (s) against the number of testing concepts for Search, Semantics, Semantics+, Rank and RL.]",
"Also, comparing these methods to RL, we conclude that query expansion based on reinforcement learning can better utilize training data and significantly outperforms naive query expansion methods.",
"The last observation is that Search performs the worst among the methods.",
"This sheds light on the limitations of traditional search models and confirms the effectiveness of semantic modeling by word embedding and neural models.",
"We increase the amount of training concepts and study whether RL is easier to train than Rank .",
"The results on Tonghuashun are shown in Figure 3 (similar patterns are demonstrated on Jinrongjie).",
"With more training concepts, the MAPs of both methods increase.",
"However, RL consistently outperforms Rank and the margin becomes larger.",
"Thus, we conclude that RL requires less data than Rank to achieve similar performance.",
"Figure 4 shows the efficiency of all algorithms on testing data.",
"The three unsupervised algorithms, Search, Semantics and Semantics+, are more efficient than the supervised algorithms, Rank and RL.",
"RL is more efficient compared to Rank , since Rank has to rank every stock to obtain concept stocks.",
"Data Sources Effectiveness : To study the effectiveness of data sources, we count how many concepts are chosen from each data source during query expansion.",
"For the Tonghuashun test data (similar tendencies are observed for Jinrongjie), 761, 689, 199 and 344 concepts are selected from $S_1$-$S_4$, respectively.",
"The accumulated rewards of these concepts for $S_1$-$S_4$ are 76.13, 61.32, 7.49 and 14.10, respectively.",
"We conclude that News and Reports are relatively more effective for improving recommendation accuracies.",
"Recommended Stocks : To obtain a better understanding of our method, we examine the symbols of the top-5 selected stocks for each concept; some examples are shown in Table 3. We notice that RL can effectively extend concepts with relevant concepts.",
"For example, the algorithm extends (Sino) with (China) and (State-owned enterprises), (Tesla) with (into China), (Electric cars) and (Musk), and (Intelligent Logistics) with (Logistics), CSN (China Smart Logistic Network), (Warehousing) and (Delivery), which results in more accurate concept stocks.",
"For (Tesla), RL made two mistakes due to rumor and ambiguity.",
"For example, SHLG is chosen because of rumors that Tesla will establish a new factory there.",
"WXQC is mistakenly chosen because WXQC is called China's Tesla in some news due to its investments in electric cars.",
"In contrast, Semantics+ and Rank are limited by a lack of supervision and by highly unbalanced datasets, respectively.",
"For example, Rank mistakenly chooses MLDQ in that it confuses (Smart Appliances) with (Intelligent Logistics).",
"We conclude that RL is capable of expanding concepts with relevant concepts, which helps find more relevant stocks.",
"We have investigated a reinforcement learning method to automatically mine evidences from large-scale text data for measuring the correlation between a concept and a list of stocks.",
"Compared to standard information retrieval methods, our method leverages a small amount of training data for obtaining a flexible strategy of query expansion, thus being able to disambiguate contexts in exploration.",
"Results on two Chinese datasets show that our method is highly competitive for our task, thus providing a tool for investors to gain understandings of emerging markets.",
"We thank the anonymous reviewers for their insightful comments.",
"We would like to thank Yumin Zhou for her insightful discussion and assisting coding.",
"Yue Zhang is the corresponding author."
] | [
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"method",
"other",
"other",
"objective",
"other",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"result",
"other",
"other",
"other"
] |
[
"Prior work has explored directly regularizing the output distributions of probabilistic models to alleviate peaky (i.e. over-confident) predictions, a common sign of overfitting.",
"This class of techniques, of which label smoothing is one, has a connection to entropy regularization.",
"Despite the consistent success of label smoothing across architectures and datasets in language generation tasks, two problems remain open: (1) there is little understanding of the underlying effects entropy regularizers have on models, and (2) the full space of entropy regularization techniques is largely unexplored.",
"We introduce a parametric family of entropy regularizers, which includes label smoothing as a special case, and use it to gain a better understanding of the relationship between the entropy of a trained model and its performance on language generation tasks.",
"We also find that variance in model performance can be explained largely by the resulting entropy of the model.",
"Lastly, we find that label smoothing provably does not allow for sparse distributions, an undesirable property for language generation models, and therefore advise the use of other entropy regularization methods in its place.",
"Our code is available online at https://github.com/rycolab/entropyRegularization .",
"When training large neural networks with millions of parameters, regularization of some form is needed to prevent overfitting, even when large amounts of data are used; models for language generation are no exception.",
"In probabilistic modeling, e.g. when the final layer of the neural network is a softmax, overfitting often manifests itself in overconfident placement of most of the probability mass on a few candidates, resulting in peaky (low-entropy) probability distributions over the vocabulary.",
"Specifically for language generation tasks, this behavior leads to the output of repetitive or frequently occurring but unrelated text, which is detrimental to the generalization abilities of the model (Chorowski and Jaitly, 2017; Holtzman et al., 2020).",
"A natural regularizer to consider is, therefore, one that penalizes overconfidence, encouraging higher entropy in the learned distribution.",
"Indeed, the literature has ascribed gains of 1 BLEU point in machine translation to label smoothing, one such technique (Chen et al., 2018).",
"Despite the clear relationship between low entropy and overfitting, only a handful of distinct entropy regularizers have been explored.",
"To fill this gap, we introduce generalized entropy regularization (GER), a unified framework for understanding and exploring a broad range of entropy-inducing regularizers.",
"GER is based on the skew-Jensen family of divergences $J_{\alpha,G}$ (Nielsen and Boltz, 2011) and thus may be generalized to any Bregman divergence through the choice of generator function $G$.",
"For the negative entropy generator function, GER recovers label smoothing (Szegedy et al., 2015) as $\alpha \to 1$, and the confidence penalty (Pereyra et al., 2017) as $\alpha \to 0$.",
"We provide formal properties of GER in Section 3, proving these special-case equivalences among other characteristics of GER.",
"We then use GER to examine the relationship between entropy and the evaluation metrics in two language generation tasks: neural machine translation (NMT) and abstractive summarization.",
"GER encompasses a large family of regularizers, which allows us to directly compare label smoothing to other forms of entropy regularization.",
"By studying the relationship between different regularizers on the performance of natural language generation (NLG) systems, we can better understand not just when but also why label smoothing aids language generation tasks.",
"Through our analysis, we gain the following insights:",
"(i) With tuning of the regularizer's coefficient, any choice of $\alpha$ can yield similar performance, i.e. there is nothing special about label smoothing.",
"In fact, our results suggest that label smoothing ($\alpha \to 1$) makes it more difficult to tune the regularizer's coefficient.",
"(ii) Label smoothing assigns infinite cost to sparse output distributions, which may be an undesirable behavior for language generation tasks.",
"(iii) There is a strong (quadratic) relationship between a model's performance on the evaluation metric and its (average) entropy, offering a hint as to why these regularizers are so effective for NLG.",
"In summary, entropy-inducing regularizers are a boon to probabilistic NLG systems, which benefit from higher-entropy output distributions.",
"Label smoothing works because it forces the model towards a higher-entropy solution, but we recommend the confidence penalty and other entropy regularizers ($\alpha < 1$) for reasons (i) and (ii) above.",
"In this work, we consider conditional probability models p ( y | x ) for natural language generation; such models assign probability to a target sequence y Y given a source sequence x .",
"Specifically, our target sequence $\mathbf{y} = \langle y_1, \ldots, y_n \rangle$ of arbitrary length $n$ is a sequence of target words $y_i$ from our vocabulary $\mathcal{Y}$.",
"The set of all complete target sequences, which are padded with distinguished beginning- and end-of-sentence symbols, BOS and EOS, is then defined as $\bar{\mathcal{Y}} := \{\mathrm{BOS}\, \mathbf{y}\, \mathrm{EOS} \mid \mathbf{y} \in \mathcal{Y}^*\}$.",
"For language generation tasks, $p(\mathbf{y} \mid \mathbf{x})$ is typically a neural network with parameters $\theta$; this network is often trained to approximate $\tilde{p}(\mathbf{y} \mid \mathbf{x})$, the empirical distribution (i.e. the distribution of the data).",
"Here, we focus on locally normalized models; in such models $p(\mathbf{y} \mid \mathbf{x})$ is factored as: $p(\mathbf{y} \mid \mathbf{x}) = p(y_1 \mid \mathbf{x}) \cdots p(y_n \mid \mathbf{x}, \mathbf{y}_{<n})$ (1), where $p(y_i \mid \mathbf{x}, \mathbf{y}_{<i})$ is defined by a softmax over the output of the final fully connected layer of the network.",
"Generation is performed using greedy search, beam search or a sampling scheme.",
"Of the candidate sequences generated, the one with the highest probability under the model p is returned as the model's prediction.",
"One way of selecting the parameters $\theta$ is to minimize the KL-divergence between the empirical distribution and the model. (Targets $y_i$ may also be characters or subwords; our experiments use byte-pair encoding (Sennrich et al., 2016).)",
"However, fitting a model that perfectly approximates the empirical distribution is, in general, fraught with problems (Hastie et al., 2001).",
"The goal of learning is to generalize beyond the observed data.",
"Exactly fitting the empirical distribution, often termed overfitting, is therefore not an ideal goal and for language generation models specifically, does not go hand-in-hand with the ability of a model to generate desirable text (Bengio et al., 2015).",
"Consequently, it is advisable to minimize a regularized objective to prevent overfitting: $L(\theta) + \beta R(\theta)$ (4), where $R(\theta)$ is a regularizer defined over the model with strength coefficient $\beta > 0$.",
"Overfitting can manifest itself as peakiness in p (Williams and Peng, 1991; Mnih et al., 2016; Pereyra et al., 2017).",
"In other words, $p_\theta$ overconfidently places most of the probability mass on very few candidates.",
"While this overconfidence improves training loss, it hurts generalization.",
"Entropy regularization is one technique that directly combats such overconfidence by encouraging more entropic (less peaky) distributions.",
"The entropy of the model is $\mathrm{H}(p_\theta) = -\sum_{\mathbf{y} \in \bar{\mathcal{Y}}} p_\theta(\mathbf{y}) \log p_\theta(\mathbf{y})$ (5), where we remove dependence on $\mathbf{x}$ for notational simplicity.",
"However, the sum in eq. (5) over $\bar{\mathcal{Y}}$ generally renders its computation intractable.",
"Instead, regularization is performed on the conditional distribution over $\mathcal{Y} \cup \{\mathrm{EOS}\}$ at each time step, which can be interpreted as an approximation of the true model entropy.",
"For ease of notation, we define a higher-order function $D_f$ over our training corpus $C$, consisting of $\langle \mathbf{x}, \mathbf{y} \rangle$ pairs, that maps a function $f$ over distributions $p, q$ as follows: $D_f(p \,\|\, q) = \sum_{\langle \mathbf{x}, \mathbf{y} \rangle \in C} \sum_{t=1}^{|\mathbf{y}|} f\big(p(\cdot \mid \mathbf{x}, \mathbf{y}_{<t}) \,\|\, q(\cdot \mid \mathbf{x}, \mathbf{y}_{<t})\big)$ (6). (Here $\mathrm{H}(p, q) := -\sum_{z \in Z} p(z) \log q(z)$ is cross-entropy and $\mathrm{H}(p) := \mathrm{H}(p, p) = -\sum_{z \in Z} p(z) \log p(z)$ is the Shannon entropy, for which $\log = \log_2$ and $Z = \mathrm{supp}(p)$.)",
"(The notation used by Pereyra et al. (2017) is imprecise.)",
"The function $D_f$ allows us to describe in notation how entropy regularization is typically employed in the training of language generation systems.",
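A sketch of $D_f$ as code (Equation 6), where `model_dists` is a hypothetical helper returning the matrix of per-step conditionals and the comparison distribution $q$ is taken to be uniform:

```python
import torch

def D_f(corpus, model_dists, f):
    """Sum f over the conditional distribution at every time step (Eq. 6)."""
    total = torch.tensor(0.0)
    for x, y in corpus:
        p = model_dists(x, y)                      # [|y|, V] row-stochastic
        u = torch.full_like(p, 1.0 / p.size(-1))   # uniform comparison dist.
        total = total + sum(f(p[t], u[t]) for t in range(p.size(0)))
    return total
```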
"Label Smoothing.",
"Label smoothing, first introduced as a regularizer for neural networks by Szegedy et al. (2015), is so named because the technique smooths hard target distributions.",
"One such distribution, the empirical distribution, is encoded as a set of one-hot vectors (hard targets) where for each data point, the correct label (e.g., vocabulary index of a word) has value 1 and all other labels have value 0 .",
"Label smoothing with strength coefficient $\beta$ is an additive smoothing scheme on the distribution over labels at every time step.",
"Interestingly, minimizing the cross-entropy between this modified distribution and the model $p_\theta$ is equivalent to adding the weighted KL divergence between the uniform distribution and the model $p_\theta$ to our original objective function with the same strength coefficient: $L(\theta)_{LS} := (1 - \beta)\, L(\theta) + \beta\, D_{\mathrm{KL}}(u \,\|\, p_\theta)$ (7). While the loss function is often scaled as above, it is nonetheless equivalent, up to a rescaling of $\beta$, to $L(\theta)_{LS} = L(\theta) + \beta\, D_{\mathrm{KL}}(u \,\|\, p_\theta)$; we use this form for consistency.",
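A sketch of the label-smoothed objective in the KL form of eq. (7), computed directly from per-step logits; it uses the identity $\mathrm{KL}(u \,\|\, p) = -\frac{1}{V}\sum_y \log p(y) - \log V$:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, targets, beta=0.1):
    """logits: [T, V] per-step scores; targets: [T] gold token indices."""
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)
    V = logits.size(-1)
    # KL(u || p) at each step: -mean(log p) - log V, averaged over steps.
    kl_u_p = (-log_p.mean(dim=-1) - torch.log(torch.tensor(float(V)))).mean()
    return nll + beta * kl_u_p
```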
"Confidence Penalty.",
"The confidence penalty, empirically explored in the supervised learning setting by Pereyra et al. (2017), aims to penalize a low-entropy model.",
"This is done by subtracting a weighted term for the entropy of the model's prediction $p_\theta(\cdot)$ from the loss function, thereby encouraging a more entropic model.",
"This is equivalent to adding the KL divergence between the model $p_\theta$ and the uniform distribution: $L(\theta)_{CP} := L(\theta) + \beta\, D_{\mathrm{KL}}(p_\theta \,\|\, u)$ (8). While Pereyra et al. (2017) found that label smoothing performed better than the confidence penalty for NMT, they only searched coarsely over a small range of $\beta$'s for both regularizers.",
"Our findings in Section 4 suggest an alternate conclusion.",
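For comparison, a sketch of the confidence penalty in the form of eq. (8), using $\mathrm{KL}(p_\theta \,\|\, u) = \log V - \mathrm{H}(p_\theta)$:

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, targets, beta=0.1):
    log_p = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_p, targets)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()   # H(p_theta)
    V = logits.size(-1)
    return nll + beta * (torch.log(torch.tensor(float(V))) - entropy)
```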
"The positive effect of both label smoothing and the confidence penalty on model performance in language generation tasks motivates further exploration of entropy-promoting regularizers.",
"To this end, we construct a parameterized family of regularizers with label smoothing and the confidence penalty as special cases.",
"We discuss the formal properties of a subset of this family, providing upper and lower bounds for it.",
"We show that divergence only occurs in one case for this subset ($\alpha \to 1$), which directly implies that no sparse solution exists when label smoothing is used as a regularizer.",
"We derive a family of regularizers from the skew-Jensen divergence $J_{\alpha,G}$ (Nielsen and Boltz, 2011), which is defined as: $J_{\alpha,G}(q \,\|\, p) := \frac{1}{\alpha(1-\alpha)}\big(\alpha\, G(p) + (1 - \alpha)\, G(q) - G(\alpha p + (1 - \alpha) q)\big)$ (9)",
"for a strictly convex generator function $G : \Omega \to \mathbb{R}$ and $\alpha \in (0, 1)$, where $\Omega$ is a closed convex set.",
"In this paper, we restrict $\Omega$ to be the $(|\mathcal{Y}|+1)$-simplex.",
"Note that $J_{\alpha,G}(q \,\|\, p) \neq J_{\alpha,G}(p \,\|\, q)$ in general, although equality holds for some choices of $G$ and $\alpha$.",
"We define the generalized entropy regularizer as $R(\theta) = D_{J_{\alpha,G}}(u \,\|\, p_\theta)$, where $u$ is the uniform distribution.",
"[Figure 1: Different divergence measures between u, the uniform distribution, and p, a probability distribution over a Bernoulli random variable X.]",
"These regularizers promote entropy because they push the model $p_\theta$ towards $u$, which is the maximum-entropy distribution, with an entropy of $\log(|\mathcal{Y}|+1)$.",
"Throughout the rest of this paper, we primarily use the generator function $G(p) = -\mathrm{H}(p)$.",
"We use $J_\alpha$ as shorthand for $J_{\alpha, -\mathrm{H}}$.",
"We note that $J_{1/2}$ is equivalent to quadruple the Jensen-Shannon (JS) divergence and that $J_\alpha$ asymptotically approaches the Kullback-Leibler (KL) divergence at the extreme values of $\alpha$.",
"Specifically, we have: lim 0 J ( q || p ) = KL( p || q ) (10) lim 1 J ( q || p ) = KL( q || p ) (11) J 1 / 2 ( q || p ) = 4 JS( q || p ) (12) We prove these relationships in App.",
"A and App.",
"B. For ease, we define J 1 := lim 1 J and J 0 := lim 0 J .",
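A small numeric sanity check of the limits in eqs. (10)-(12), using the normalized skew-Jensen form of eq. (9) with generator $G = -\mathrm{H}$ (natural log is used for simplicity; the base only rescales all quantities consistently):

```python
import numpy as np

def H(p):
    return -np.sum(p * np.log(p))

def KL(a, b):
    return np.sum(a * np.log(a / b))

def J(alpha, q, p):
    """Normalized skew-Jensen divergence with generator G = -H (Eq. 9)."""
    G = lambda d: -H(d)
    mix = alpha * p + (1 - alpha) * q
    return (alpha * G(p) + (1 - alpha) * G(q) - G(mix)) / (alpha * (1 - alpha))

u = np.full(4, 0.25)                  # uniform distribution
p = np.array([0.7, 0.1, 0.1, 0.1])
print(J(1e-5, u, p), KL(p, u))        # Equation 10: approximately equal
print(J(1 - 1e-5, u, p), KL(u, p))    # Equation 11: approximately equal
print(J(0.5, u, p))                   # Equation 12: equals 4 * JS(u || p)
```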
"We note the following two equivalences for these special cases.",
"Proposition 1. J 1 ( u || p ) = H( q, p ) .",
"In words, the gradient of the loss with GER as 1 is equivalent to the gradient of the loss augmented with label smoothing .",
"Proposition 2. J 0 ( u || p ) = H( p ) .",
"In words, the gradient of the loss with GER as 0 is equivalent to the gradient of the loss augmented with the confidence penalty .",
"See App.",
"C and App.",
"D for proofs.",
"When fitting a model p , we generally optimize the inclusive KL , i.e. KL( p || p ) , so that, among other reasons, p has support everywhere that p has support.",
"However, it is unclear what relationships we want to encourage between the model p and the uniform distribution u during regularization as complete support of u implies no word can ever have non-zero probability.",
"Here we explore formal properties of J as a regularizer to gain insight into how, as a function of , these regularizers affect the learned distribution.",
"Magnitude.",
"Figure 1 shows the different divergence measures between u and p .",
"We see that J 1 = KL( u || p ) (label smoothing) is much larger than J 0 = KL( p || u ) (confidence penalty) at values of p farther from u .",
"This indicates that J 1 would be a stronger regularizer than J < 1 , i.e. penalize values of p far from u more heavily, given the same strength coefficient .",
"Note that it is not always the case that J < 1 ( u || p ) J 1 ( u || p ) for fixed p .",
"We can, however, bound J from above and below by other quantities.",
"Sparsity.",
"Sparsity is generally a desirable trait in probabilistic models; specifically for structured prediction, it leads to improvements in performance and interpretability (Martins et al., 2011; Niculae WMT'14 De-En IWSLT'14 De-En MTTT Fr-En H BLEU H BLEU H BLEU No Regularization 0 0.11 31.1 0 0.1 35.7 0 0.15 35.2 Label Smoothing DJ 1 ( =0 . 1 ) 1 0.11 0.23 31.3 +0.2 1 0.11 0.18 36.9 +1.2 1 0.11 0.18 36.5 +0.8 Label Smoothing DJ 1 1 0.35 0.38 31.7 +0.6 1 0.50 0.40 37.2 +1.5 1 0.693 0.47 37.5 +2.3 Confidence Penalty DJ 0 0 0.28 0.55 31.6 +0.5 0 0.76 0.81 37.5 +1.8 0 0.95 0.86 37.4 +2.2 GER DJ 0.7 0.65 0.47 32.0 +0.9 0.5 1.00 0.56 37.5 +1.8 0.85 0.52 0.37 37.6 +2.4 Table 2: BLEU scores and normalized entropy H( p ) on the test sets for WMT'14 De-En, WMT'14 De-En, and MTTT Fr-En.",
"For example, Martins and Astudillo (2016) showed the benefits of using sparsemax, which induces sparsity in an output distribution or attention layer, for natural language inference tasks.",
"There are also intuitive reasons for allowing p to be sparse.",
"Part of modeling language generations tasks is learning when particular sequences cannot , or at least should not, occur (e.g. are grammatically or syntactically incorrect).",
"In these cases, a model should be able to assign 0 probability mass to that sequence.",
"However, there is no sparse optimal solution p when using label smoothing as the label smoothing loss function becomes divergent if p does not assign probability mass y supp ( u ) .",
"Proposition 5. J ( u || p ) is finite for any p and any < 1 .",
"As 1 , J ( u || p ) diverges iff y supp( u ) for which p ( y ) = 0 .",
"See App.",
"F for a proof.",
"We evaluate our family of entropy regularizers on two language generation tasks: machine translation and abstractive summarization.",
"We then analyze trends in model performance as a function of and model entropy 9 and explore how this entropy affects other properties of language generation models.",
"In the following experiments, each model is trained using eq.",
"(4) where R ( ) = DJ ( p || p ) .",
"We conduct searches over and using Bayesian optimization (Snoek et al., 2012) to find the combination of regularizer DJ and strength coefficient 8 We have 1 as an exception; the standard deviation is slightly higher for larger values of .",
"9 Model entropy is estimated as an average of the entropies of distributions at each time step during decoding, i.e. H( p ) = DH ( p ) .",
"Entropy is normalized by the maximum possible entropy for the given vocabulary size ( log | Y | ) in all figures and tables to control for the fact that languages have vocabularies of different sizes.",
"that lead to the lowest loss on the development set for the respective task.",
"10 We additionally do a more fine-grained grid search over for J 0 (confidence penalty) and J 1 (label smoothing) for completeness.",
"All other model hyperparameters are held constant.",
"We run experiments on multiple architectures and across several data sets to ensure trends are general.",
"We explore performance of the regularizer DJ on NMT systems using three language pairs and corpora of two different sizes on the following tasks: WMT'14 German-to-English (De-En) (Bojar et al., 2014), IWSLT'14 German-to-English (De-En) (Cettolo et al., 2012), and Multitarget TED Talks Task (MTTT) French-to-English (FrEn) and Japanese-to-English (Ja-En) tasks (Duh, 2018).",
"For the larger WMT data set, we train fewer models using coarser-grained and ranges.",
"We perform experiments for both Transformers (Vaswani et al., 2017) and convolutional sequence-to-sequence models (Gehring et al., 2017).",
"For reproducibility and comparability, we use the data pre-processing scripts provided by fairseq (Ott et al., 2019) and follow recommended hyperparameter settings from previous work (Vaswani et al., 2017; Gehring et al., 2017) for baseline models.",
"We use SacreBLEU (Post, 2018) to calculate BLEU scores (Papineni et al., 2002).",
"Specific data pre-processing steps and model hyperparameter details are provided in App.",
"G. Decoding is performed with length-normalized beam search with a beam size of 5 unless otherwise stated.",
"Early stopping was used during training; model parame-10 We only report results with generator function G = H as results using G ( z ) = || z || 22 were consistently worse and often did not improve on the baseline; these results may be seen in App.",
"H. Figure 3: Model entropy H( p ) vs. BLEU on IWSLT'14 German to English (De-En) and Multitarget TED Talks Task French to English (Fr-En) using a Transformer architecture; each point is a fully trained model, regularized with DJ for varying and .",
"ters were taken from the checkpoint with the best validation set BLEU .",
"Results of our experiments are shown in Table 2 and Figure 3. We see the same relation between model entropy and BLEU with both Transformer and convolutional architectures and between different language pairs.",
"We show results for the Transformer architectures inline as they are the current standard for many NLP tasks; results for convolutional architectures are in App.",
"H. Our results show better performance is achieved with values of and other than those that correspond to label smoothing with = 0 .",
"1 , which is the commonly used value for the strength coefficient (Vaswani et al., 2017; Edunov et al., 2018).",
"Moreover, the relationship between model entropy and evaluation performance is strong, following the same trend for all values of , which suggests tuning a model for a specific entropy rather than , may be a better method in practice.",
"We discuss trends in 4.3.",
"We fine-tune BART (Lewis et al., 2019) on the CNN/DailyMail abstractive summarization task",
"(Hermann et al., 2015) with regularizer DJ .",
"Data pre-processing and other hyperparameter settings follow Lewis et al. (2019).",
"Results in Table 3 show that optimal values of ROUGE-L (Lin, 2004), the evaluation metric, can be achieved by regularizing with DJ for different values of .",
"Notably, the entropy is virtually the same for the models that achieve top performance, demonstrating the closer relationship of performance with model entropy than with , discussed further in 4.3.",
"We look at the strength of the relationship between the evaluation metrics and both and the model's entropy.",
"Figure 3 shows a quadratic relationship between model entropy and BLEU .",
"On the other hand, the relationship between (coloring of points) and BLEU is not an obvious one; the best performing models are regularized with various values of .",
"As correlation only tells us about linear relationships, we report mutual information to measure the strength of the relationship between , model entropy, and BLEU .",
"Mutual information shows the proportion of entropy of a variable that is ex-plained by another and is often used as a generalized correlation measure i.e. for nonlinear relationships (Song et al., 2012).",
"We see in Figure 4 that model entropy has a much stronger relationship with BLEU than .",
"Indeed, the normalized mutual information (NMI) between and BLEU is 0 .",
"05 compared to 0 .",
"25 between model entropy and BLEU implying that any flavor of entropy regularization can lead to similar performance.",
"While the relationship between and BLEU is Figure 4: Entropy H( ) , Conditional Entropy H( | ) and Mutual Information I( ; ) for BLEU with alpha ( ) and model entropy, respectively.",
"weak, it is still statistically significant.",
"Some evidence for this exists in Figure 3 where a closer examination reveals that each level of has a similar quadratic trend, albeit with a different offset.",
"Specifically, the performance of models trained with DJ for [0 . 75 , 1] (which includes label smoothing) starts to degrade at lower levels of entropy than models trained with DJ for [0 , 0 . 25] (confidence penalty).",
"As quantitative validation of this observation, we",
"(i) run a conditional independence test to see whether BLEU and are conditionally independent given model entropy and",
"(ii) look at the range of for which DJ leads to good performance for different .",
"Conditional Independence.",
"If and BLEU are conditionally independent it implies that the value of does not supply any additional information about the value BLEU given model entropy, i.e. does not matter when using the regularizer DJ .",
"We use a Monte Carlo permutation test where the null hypothesis is that no relationship between and BLEU exists.",
"11 However, this test rejects the null hypothesis with p -value < 0 .",
"05 , supporting the alternate hypothesis that and BLEU are not conditionally independent.",
"Tuning .",
"On the tasks for which we trained > 60 models, we take the subset of models for which performance is within 1% ( < 0 . 4 BLEU ) of the best overall model.",
"We then look at the range of used with the regularizer DJ for these models.",
"The range of that meets the above criterion is 11 The underlying distributions of random variables are assumed to be Gaussian.",
"much larger for close to 0 than for for close to 1 (see Figure 5).",
"We contend this implies that DJ is easier to tune (i.e. it is more robust) for 0 while for 1 , DJ is relatively sensitive to .",
"We take a subset of models trained with regularizers DJ 0 and DJ 1 and examine the sparsity of p .",
"Results in Table 4 support our formal analysis regarding the sparsity of DJ 0 and DJ 1 in 3.2; DJ 1 steeply penalizes sparsity while DJ for < 1 allows words to be assigned probability 0 .",
"We look at how the probability (under p ) of the reference sequence on the test set changes with model entropy.",
"While higher entropy in models trends positively with downstream evaluation metrics (Figure 3), experiments show they often lead to lower log-likelihood of the reference sequence.",
"Both of these observations have been made for models trained with label smoothing in previous works (Ott et al., 2018; Muller et al., 2019).",
"However, log-likelihood alone does not tell a complete story.",
"During decoding, we search for the Figure 6: Average ranking in p of words in the reference sequence on the test set for IWSLT '14 (De-En) plotted against model entropy."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Recent studies in dialogue state tracking (DST) leverage historical information to determine states which are generally represented as slot-value pairs.",
"However, most of them have limitations to efficiently exploit relevant context due to the lack of a powerful mechanism for modeling interactions between the slot and the dialogue history.",
"Besides, existing methods usually ignore the slot imbalance problem and treat all slots indiscriminately, which limits the learning of hard slots and eventually hurts overall performance.",
"In this paper, we propose to enhance the DST through employing a contextual hierarchical attention network to not only discern relevant information at both word level and turn level but also learn contextual representations.",
"We further propose an adaptive objective to alleviate the slot imbalance problem by dynamically adjust weights of different slots during training.",
"RETRACTED This paper was retracted.",
"Experimental results show that our approach reaches 52.68% and 58.55% joint accuracy on MultiWOZ 2.0 and MultiWOZ 2.1 datasets respectively and achieves new state-of-the-art performance with considerable improvements (+1.24% and +5.98%).",
"1 1 Introduction Recently, task-oriented dialogue systems have attracted increasing attention in both industry and academia due to their broad application for helping users accomplish tasks through spoken interactions (Young, 2002; Young et al., 2013; Gao et al., 2019a).",
"Dialogue state tracking (DST) is an essential part of dialogue management in task-oriented dialogue systems.",
"Given current utterances and dialogue history, DST aims to determine the set of Joint work with Pattern Recognition Center, WeChat AI, Tencent Inc.",
"User: Hello, I'm looking for a resraurant with fair prices.",
"State: price range=moderate Sys: OK.",
"There are Golden Wok Chinese restaurant and Nirala which serves Indian food, which one do you like?",
"User: Are they both have a reasonable price ?",
"State: price range=moderate Sys: Of course.",
"User: Please tell me the address of Golden Wok.",
"State: price range=moderate; food=chinese Table 1: An example dialogue.",
"At the last turn, it is necessary to capture relevant information in dialogue history to correctly predict the value of slot food , which is underlined.",
"User and Sys represent user utterance and system response respectively, and the italic text means dialogue states.",
"As Table 1 shows, the dialogue state is usually dependent on relevant context in the dialogue history, which is proven in previous studies (Sharma et al., 2019; Wu et al., 2019).",
"However, traditional DST models usually determine dialogue states by considering only utterances at current turn (Hen-derson et al., 2014b; Mrksic et al., 2017; Zhong et al., 2018; Chao and Lane, 2019) which neglects the use of dialogue history.",
"Recent researches attempt to address this problem through introducing historical dialogue information into the prediction of slot-value pairs.",
"Most of them leverage a naive attention between slots and concatenated historical utterances (Wu et al., 2019; Zhou and Small, 2019; Gao et al., 2019b; Zhang et al., 2019; Le et al., 2020a,b) or only utilize partial history (Ren et al., 2019; Kim et al., 2019; Sharma et al., 2019) or lack direct interactions between slots and history (Ren et al., 2018; Lee et al., 2019; Goel et al., 2019).",
"Briefly, these methods are deficient in exploiting relevant context from dialogue history.",
"Furthermore, there are differences in the frequency of different slots and different slot-value pairs.",
"For example, in MultiWOZ 2.0 train set, there are 15384 samples related to the slot train-day while 5843 for the slot attraction-name ; the slot-value pair ( attraction-area , center ) occurs 5432 times and ( taxi-departure , royal spice ) occurs only 9 times; etc.",
"We refer to this problem as slot imbalance, which makes the learning difficulties of different slots varies (Refer to Appendix for de-tails).",
"However, existing approaches usually ignore the slot imbalance problem and treat all slots indiscriminately, which limits the learning of those hard slots and eventually damages the overall performance.",
"To address the two aforementioned problems, we propose an effective model equipped with a c ontextual h ierarchical a ttention n etwork (CHAN) to fully exploit relevant context from dialogue history, and an adaptive objective to alleviate the slot imbalance problem.",
"In CHAN, the slot firstly retrieves word-level relevant information from utterances at each turn.",
"Then, these word-level relevant information will be encoded into contextual representations by rich interactions.",
"Finally, the slot aggregates all contextual representations into turn-level relevant information and then we combine it with word-level relevant information to obtain the outputs.",
"To further enhance the ability to exploit relevant context, we employ a state transition prediction task to assist DST learning.",
"For the slot imbalance problem, our adaptive objective can dynamically evaluate the difficulties in an accuracy-sensitive manner and then adaptively adjust the learning weights for different slots.",
"Thus, it can balance the learning of all slots as far as possible.",
"RETRACTED This paper was retracted.",
"We evaluate the effectiveness of our model on MultiWOZ 2.0 and MultiWOZ 2.1 datasets.",
"Experimental results show that our model reaches 52.68% and 58.55% joint accuracy, outperforming previous state-of-the-art by +1.24% and +5.98% , respectively.",
"The ablation study also demonstrates each module's effectiveness in our model.",
"Our contributions are as follows: We propose an effective contextual hierarchical attention network to fully exploit relevant context from dialogue history and employ a state transition prediction task to further enhance it.",
"We design an adaptive objective to address the slot imbalance problem by dynamically adjusting the weight of each slot.",
"To the best of our knowledge, our method is the first to address the slot imbalance problem in DST.",
"Experimental results show that our model achieves state-of-the-art performance with significant improvements over all previous models.",
"As shown in Figure 1, the proposed model consists of three components:",
"1) the c ontextual h ierarchical a ttention n etwork (CHAN);",
"2) the state transition prediction module;",
"3) the adaptive objective.",
"We share all the model parameters for each slot to keep our model universal for all slots.",
"2.1 Problem Statement Given a dialogue X = { ( U 1 , R 1 ) , ..., ( UT , RT ) } of T turns where U t represents user utterance and R t represents system response of turn t , we define the dialogue state at each turn t as B t = { ( s, v t ) , s S} where S is a set of slots and v t is the corresponding value of the slot s .",
"Following Lee et al. (2019), we use the term slot to refer to the concatenation of a domain name and a slot name in order to represent both domain and slot information.",
"For example, restaurant-food .",
"Similar to (Ren et al., 2018; Lee et al., 2019), we decompose the dialogue state tracking to a multi-label classification problem where we score each value with slot-related features in a non-parametric way and then choose the best candidate.",
"We also add a literally none into the value set of each slot to represent that no corresponding value is tracked.",
"Recently the pre-trained BERT language model (Devlin et al., 2019) shows powerful ability in universal contextual semantics representation, thus we employ BERT to encode utterances, slots and values.",
"To better retrieve relevant context from dialogue history, we devise Slot-Word Attention and Slot-Turn Attention to query both relevant keywords and turns.",
"Specifically, we exploit a Context Encoder between word-level and turn-level attention to capture contextual representations of relevant information from dialogue history.",
"Furthermore, we devise a Global-Local Fusion Gate to balance the information from global context and local utterances.",
"Sentence Encoder.",
"BERT leverages a special token [CLS] to aggregate the whole representation of a sentence and a special token [SEP] to indicate the end of a sentence.",
"where BERT finetune means that it will be fine-tuned during training.",
"Therefore, BERT finetune will learn a corresponding generalization of sentence representations and adapt to dialogue state tracking task.",
"For slot s and value v t , we adopt another pre-trained BERT fixed to encode them into contextual semantics vectors h s and h vt respectively.",
"Different from utterances, we use the output vector of the special token [CLS] to obtain the whole sentence representation: h s = BERT fixed ( s ) (2) h vt = BERT fixed ( v t ) where the weights of BERT fixed are fixed during training thus our model can be scalable to any unseen slots and values with sharing the original BERT representation.",
"Slot-Word Attention .",
"The slot-word attention is a multi-head attention (MultiHead( Q , K , V )), which takes a query matrix Q , a key matrix K and a value matrix V as inputs.",
"Refer to (Vaswani et al., 2017) for more details.",
"For each slot s , the slot-word attention summarizes word-level slot-related information from each turn t into a d -dimensional vector c words,t , which can be determined as follows: c words,t = MultiHead( h s , h t , h t ) (3) Context Encoder .",
"The context encoder is a unidirectional transformer encoder, which is devised to model the contextual relevance of the extracted word-level slot-related information among { 1 , ..., t } turns.",
"The context encoder contains a stack of N identical layers.",
"Each layer has two sub-layers.",
"The first sub-layer is a masked multi-head self-attention ( MultiHead ), in which Q = K = V .",
"The second sub-layer is a position-wise fully connected feed-forward network ( FFN ), which consists of two linear transformations with a ReLU activation (Vaswani et al., 2017).",
"where m n is the output of the n -th layer of context encoder and PE( ) denotes positional encoding function.",
"Note that residual connection and layer normalization are omitted in the formula.",
"Slot-Turn Attention .",
"To retrieve turn-level relevant information from contextual representation, we devise a slot-turn attention which is the multihead attention as follows: c turns,t = MultiHead( h s , c ctxs, t , c ctxs, t ) (6) Therefore, the model can access word-level and turn-level relevant information from the historical dialogues.",
"Global-Local Fusion Gate .",
"To balance the information of global context and local utterances, we propose to dynamically control each proportion of contextual information and current turn information so that the model can not only benefit from relevant context but also keep a balance between global and local representations.",
"Similar to Hochreiter and Schmidhuber (1997), we leverage a fusion gate mechanism, which computes a weight to decide how much global and local information should be combined according to c words,t and c turns,t .",
"It can be defined as follows: g s,t = ( W g (cid:12) [ c words,t ; c turns,t ]) (7) c gates,t = g s,t c words,t + (1 g s,t ) c turns,t where W g R 2 d d are parameters, means sigmoid activation function, (cid:12) and mean the pointwise and element-wise multiplication respectively.",
"P TP where V s is the candidate value set of slot s and v t V s is the ground-truth value of slot s .",
"To better capture relevant context, we further introduce an auxiliary binary classification task to jointly train with DST: State Transition Prediction",
"(STP), which is to predict if the value for a slot is updated compared to previous turn.",
"This module reads c gates,t 1 and c gates,t as inputs and the transition probability p stps,t can be calculated as follows: c stps,t = tanh( W c (cid:12) c gates,t ) (10) p stps,t = ( W p (cid:12) [ c stps,t ; c stps,t 1 ]) where W c R d d , W p R 2 d are parameters.",
"Note that when t = 1 , we simply concatenate c stps,t with zero vectors.",
"For this task, we calculate the binary cross entropy loss between ground-truth transition labels y stps,t and the transition probability p stps,t , which is defined as follows: L stp = X s S TX t =1 y stps,t log( p stps,t ) (11) 2.4 Adaptive Objective Essentially, the slot imbalance problem can be considered as a kind of class imbalance because there is an imbalance among both different slots and different samples.",
"Instead of treating all slots indiscriminately, it is important to balance the learning of different slots.",
"Recently, Lin et al. (2017) propose a soft-sampling method, Focal Loss, to re-weight the losses of different classes.",
"Inspired by their work, we design a novel adaptive objective for DST which evaluates the difficulty from each slot's accuracy on the validation set and adaptively adjusts the weight of each slot during optimization.",
"We define the accuracy of slot s on validation set as acc vals .",
"Our adaptive objective is based on the following intuitions: (1) If acc vals acc vals 0 ; then slot s is more difficult than slot s 0 .",
"Suppose this slot-level difficulty is defined as ; then s = 1 acc vals P s 0 S 1 acc vals 0 |S| (12) (2) Suppose there are two samples { ( U t , R t ) , ( s, v t ) } and { ( U t 0 , R t 0 ) , ( s 0 , v t 0 ) } .",
"If the former confidence is lower than the latter, then sample { ( U t , R t ) , ( s, v t ) } is more difficult than { ( U t 0 , R t 0 ) , ( s 0 , v t 0 ) } .",
"Suppose this sample-level difficulty is defined as ; then ( s, v t ) = (1 p ( s, v t )) (13) where p ( s, v t ) is the confidence of sample { ( U t , R t ) , ( s, v t ) } and is a hyper-parameter.",
"L adapt ( s, v t ) = s ( s, v t ) log( p ( s, v t )) (14) Focal Loss assigns static learning weights on slots and doesn't change them anymore during the whole training.",
"Compared to Focal Loss, our adaptive objective can fit data better by dynamically evaluate the difficulties in an accuracy-sensitive manner and then adaptively control the learning weights for different slots, which is proved in our experiments.",
"If the difficulty of slot s is greater than the average difficulty of all slots, s would increase and enlarge the loss of s .",
"Similarly, the optimization of sample { ( U t , R t ) , ( s, v t ) } with a low confidence p ( s, v t ) would be encouraged by a larger loss.",
"When an epoch ends, the adaptive objective re-evaluates the difficulty of each slot and updates s .",
"Therefore, it can not only encourage the optimization of those hard slots and samples but also balance the learning of all slots.",
"In our model, we firstly jointly train the DST and STP tasks to convergence and then fine-tune DST",
"task with the adaptive objective.",
"During joint training, we optimize the sum of these two loss functions as following: L joint = L dst + L stp (15) At the fine-tuning phase, we adopt the adaptive objective to fine-tune DST task as following: L finetune = X s S TX t =1 L adapt ( s, v t ) (16) 3 Experiments Setup 3.1 Datasets & Metrics Hotel Train Attraction Restaurant Taxi Slots price, type,parking,stay,day,people,area,stars,internet,name destination,departure,day,arriveby,leaveat,people area,name,type food,price,area,name,time,day,people destination,departure,arriveby,leaveby Train 3381 3103 2717 3813 1654 Valid 416 484 401 438 207 Test 394 494 395 437 195 Table 2: The dataset statistics of MultiWOZ 2.0 & 2.1.",
"public task-oriented dialogue datasets, including about 10,000 dialogues with 7 domains and 35 domain-slot pairs.",
"MultiWOZ 2.1 shares the same dialogues with MultiWOZ 2.0 but it fixed previous annotation errors.",
"The statistics are shown in Table 2.",
"Following (Wu et al., 2019), we use only 5 domains { restaurant , hotel , train , attraction , taxi } excluding hospital and police since these two domains never occur in the test set.",
"We preprocess the datasets following (Lee et al., 2019) 2 .",
"We use joint accuracy and slot accuracy as our evaluation metrics.",
"Joint accuracy is the accuracy of the dialogue state of each turn and a dialogue state is evaluated correctly only if all the values of slots are correctly predicted.",
"Slot accuracy only considers individual slot-level accuracy.",
"3.2 Baseline Models We compare our results with the following competitive baselines: DSTreader proposes to model DST as a machine reading comprehension task and extract spans from dialogue history (Gao et al., 2019b).",
"GLAD-RCFS uses a heuristic rule to extract relevant turns and lets slot-value pairs to query relevant context from them (Sharma et al., 2019).",
"HyST employs a hierarchical encoder and takes a hybrid way combining both predefined-ontology and open-vocabulary settings (Goel et al., 2019).",
"TRADE encodes the whole dialogue context and decodes the value for every slot using a copy-augmented decoder (Wu et al., 2019).",
"DST-QA proposes to model DST as a question answering problem and uses a dynamically-evolving knowledge graph to learn relationships between slot pairs (Zhou and Small, 2019).",
"SOM-DST considers the dialogue state as an explicit fixed-size memory and proposes a selectively overwriting mechanism (Kim et al., 2019).",
"SUMBT exploits BERT as the encoder of the utterances, slots and values.",
"It scores every candidate slot-value pair in a non-parametric manner using a distance measurement (Lee et al., 2019).",
"candidate values and slot-context encoding considering all slots as picklist-based slots (Zhang et al., 2019).",
"GLAD-RCFS, HyST, SUMBT, DST-picklist are predefined-ontology models as well as our model and DSTreader, TRADE, DST-QA, SOM-DST are open-vocabulary models.",
"We employ the pre-trained BERT model that has 12 layers of 784 hidden units and 12 self-attention heads 3 .",
"For the multi-head attention, we set heads count and hidden size to 4 and 784, respectively.",
"For the context encoder, we set the transformer layers to 6.",
"We set the max sequence length of all inputs to 64 and the batch size to 32.",
"In all training, we use Adam optimizer (Kingma and Ba, 2015) and set the warmup proportion to 0.1.",
"Specifically, in the joint training phase, we set the peak learning rate to 1e-4.",
"At the fine-tuning phase, we set to 2, peak learning rate to 1e-5.",
"The training stopped early when the validation loss was not improved for 15 consecutive epochs.",
"For all experiments, we report the mean joint accuracy over multiple different random seeds to reduce statistical errors.",
"To prove the effectiveness of each module of the proposed CHAN, we conduct ablation experiments RETRACTED This paper was retracted.",
"4 Experiment Results 4.1 Main Results Table 3 shows the joint accuracy of our model and other baselines on the test sets of MultiWOZ 2.0 and 2.1.",
"Our model beats all baselines whether they are based on predefined ontology or open vocabulary, and achieves 52.68% and 58.55% joint accuracy with considerable improvements (1.24% and 5.98%) over previous best results on MultiWOZ 2.0 and 2.1, respectively.",
"Also, our model achieves 97.69% and 98.14% slot accuracy with 0.36% and 0.58% improvements over the previous best results on MultiWOZ 2.0 and 2.1, respectively.",
"Similar to (Kim et al., 2019), we find that our model achieves much higher improvements on MultiWOZ 2.1 than 3 It is published as bert-base-uncased model in https://github.com/huggingface/pytorch-transformers Model MultiWOZ 2.1 Our Model 58.55 state transition prediction 57.86 (-0.69) adaptive objective fine-tuning 57.45 (-1.10) above two (only CHAN) 57.00 (-1.55) Our Model (FL ( =1, =2)) 58.10 (-0.45) Table 4: The ablation study of the state transition prediction and the adaptive objective on the MultiWOZ 2.1 test set with joint accuracy (%).",
"means fine-tuning with focal loss instead.",
"that on MultiWOZ 2.0.",
"This is probably because MultiWOZ 2.1 fixes lots of notation errors in MultiWOZ 2.0 and our model can benefit more from more accurate relevant context.",
"As shown in Table 4, we estimate the effectiveness of the proposed state transition prediction and adaptive objective on the MultiWOZ 2.1 test set.",
"The results show that both state transition prediction task and adaptive objective can boost the performance.",
"Removing the state transition prediction task reduces joint accuracy by 0.69%, and the joint accuracy decreases by 1.10% without the adaptive objective fine-tuning.",
"Moreover, when we remove the state transition prediction task and don't fine-tune our model with adaptive objective (only CHAN remains), the joint accuracy decreases by 1.55%.",
"Also, to explore the importance of adjusting the s adaptively, we replace the adaptive objective with original focal loss ( = 1 , = 2 ), which leads to 0.45% drop.",
"on the MultiWOZ 2.1 test set as shown in Table 5.",
"We observe that a slight joint accuracy drop of 0.24% after removing the global-local fusion gate, which proves the effectiveness of fusing global context and local utterances.",
"Moreover, removing the slot-turn attention and context encoder leads to a decrease by 0.15% and 1.72% respectively, which demonstrates that the turn-level relevant information and the contextual representations of word-level relevant information are effective to improve the performance.",
"Moreover, after we remove the aforementioned three modules and sum the word-level relevant information of { 1 , , t } turns as output, the joint accuracy reduces by 6.72%, which is much higher than the sum of above three reductions.",
"It demonstrates that effectively modeling interactions with word-level relevant information of dialogue history is crucial for DST.",
"Figure 2 shows the visualization of turn-level and word-level attention of the restaurant-name slot on a prediction example of our model at turn 5.",
"while almost pays no attention to turns { 1,2 } .",
"And from the word-level attention visualization, we can easily find that the restaurant-name slot attends to the dojo noodle bar with the highest weight in both turn 3 and turn 4.",
"Although there is no slot-related information at turn 5, our model still makes the correct decision by exploiting relevant context from the historical dialogue.",
"4.4 Effects of Adaptive Obj.",
"on Acc.",
"per Slot As Figure 3 shows, we draw the accuracy changes of each slot on MultiWOZ 2.1 test set after fine-tuning our model with adaptive objective.",
"quency (The detailed accuracy results are in the Appendix).",
"Thus, slots on the left side are relatively more difficult than slots on the right side.",
"After fine-tuning with the adaptive objective, most slots on the left side achieve significant improvements, which proves the adaptive objective can encourage the learning of the hard slots.",
"Although adaptive objective tends to decrease the weight of slots on the right side, they also benefit from the fine-tuning.",
"We think that this is because encouraging the optimizing of hard slots enhances our model by tracking more complicated dialogue states.",
"It proves that our adaptive objective can not only improve the performance of relatively hard slots but also boost the performance of relatively easy slots.",
"4.5 Qualitative Analysis To explore the advantages of our model compared to baseline models, we conduct a human evaluation on a subset of the MultiWOZ 2.1 test set where our model makes correct predictions while SUMBT (a previous strong baseline) fails.",
"We predefine three types of improvements: historical information inference improvement which means inferring historical information is necessary for correct decisions, current information inference improvement which means inferring current information is enough for correct decisions, and other improvements.",
"As shown in Table 6, 64.49% improvements come from historical information inference, which demonstrates that our model can better exploit relevant context from the dialogue history.",
"The general backbone of our model is a hierarchical attention network that can effectively aggregate query-related information at multiple levels (Yang RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.563 . 6330 et al., 2016; Ying et al., 2018; Wang et al., 2018; Xing et al., 2018; Aujogue and Aussem, 2019; Naik et al., 2018; Liu and Chen, 2019).",
"Traditional statistical dialogue state tracking models combine semantics extracted by spoken lan-Improvement",
"guage understanding modules to predict the current dialogue state (Williams and Young, 2007; Thomson and Young, 2010; Wang and Lemon, 2013; Williams, 2014) or to jointly learn speech understanding (Henderson et al., 2014b; Zilka and Jur-cicek, 2015; Wen et al., 2017).",
"One drawback is that they rely on hand-crafted features and complex domain-specific lexicons besides the ontology, and they are hard to extend and scale to new domains.",
"Recent neural network models are proposed for further improvements (Mrksic et al., 2015; Hori et al., 2016; Mrksic et al., 2017; Lei et al., 2018; Xu and Hu, 2018; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018; Wu et al., 2019; Ren et al., 2019; Balaraman and Magnini, 2019).",
"Ren et al. (2018) and Lee et al. (2019) use an RNN to encode the slot-related information of each turn, where slots can not attend to relevant information of past turns directly.",
"Sharma et al. (2019) employ a heuristic rule to extract partial dialogue history and then integrate the historical information into prediction in a coarse manner.",
"Goel et al. (2019) encode the dialogue history into a hidden state and then simply combine it with the slot to make decisions.",
"These models are deficient in fully exploiting the relevant context in dialogue history.",
"Gao et al. (2019b) introduce a slot carryover model to decide whether the values from the previous turn should be used or not and Kim et al. (2019) introduce a state operation predictor to decide the operation with the previous state.",
"Different from them, we consider the state transition prediction as an additional enhancement while they integrate it into their DST pipelines.",
"Besides, Zhong et al. (2018) only employ local modules to model the slot-specific representations, which neglects the slot imbalance problem.",
"We introduce an effective model that consists of a contextual hierarchical attention network to fully exploit relevant context from dialogue history and an adaptive objective to alleviate the slot imbalance problem in dialogue state tracking.",
"Experimental results show that our model achieves state-of-the-art performance of 52.68% and 58.55% joint accuracy with considerable improvements (+1.24% and +5.98%) over previous best results on MultiWOZ 2.0 and MultiWOZ2.1 datasets, respectively.",
"Although our model is based on predefined ontology, it is universal and scalable to unseen domains, slots and values.",
"The main contributions of our model, CHAN and adaptive objective, can also be applied to open-vocabulary models.",
"We will explore it in the future.",
"We thank the anonymous reviewers for their insightful comments.",
"This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).",
"RETRACTED This paper was retracted."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"method",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"result",
"abstain",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"Summarizing biomedical discovery from genomics data using natural languages is an essential step in biomedical research but is mostly done manually.",
"Here, we introduce Textomics, a novel dataset of genomics data description, which contains 22,273 pairs of genomics data matrices and their summaries.",
"Each summary is written by the researchers who generated the data and associated with a scientific paper.",
"Based on this dataset, we study two novel tasks: generating textual summary from a genomics data matrix and vice versa.",
"Inspired by the successful applications of k nearest neighbors in modeling genomics data, we propose a k NN-Vec2Text model to address these tasks and observe substantial improvement on our dataset.",
"We further illustrate how Textomics can be used to advance other applications, including evaluating scientific paper embeddings and generating masked templates for scientific paper understanding.",
"Textomics serves as the first benchmark for generating textual summaries for genomics data and we envision it will be broadly applied to other biomedical and natural language processing applications.",
"1 1 Introduction Modern genomics research has become increasingly automated through being roughly divided into three sequential steps: next-generation sequencing technology produces a massive amount of genomics data, which are in turn processed by bioin-formatics tools to identify key variants and genes, and, ultimately, analyzed by biologists to summarize the discovery (Goodwin et al., 2016; Kanehisa and Bork, 2003).",
"In contrast to the first two steps that have been automated by new technologies and Equal Contribution 1 The link to access our code: https://github.com/ amos814/Textomics software, the last step of summarizing discovery is still largely performed manually, substantially slowing down the progress of scientific discovery (Hwang et al., 2018).",
"A plausible solution is to automatically summarize the discovery from genomics data using neural text generation, which has been successfully applied to radiology report generation (Wang et al., 2021; Yuan et al., 2019) and clinical notes generation (Melamud and Shiv-ade, 2019; Lee, 2018; Miura et al., 2021).",
"In this paper, we study this novel task of generating sentences to summarize a genomics data matrix.",
"Several excisting approaches demonstrate encouraging results in generating short phrases to describe functions of a set of genes (Wang et al., 2018; Zhang et al., 2020; Kramer et al., 2014).",
"However, our task is fundamentally different from these: the input of our task is a matrix that contains tens of thousands of genes, which could be noisier than a set of selected genes; the outputs of our task are sentences instead of short phrases or controlled vocabularies.",
"To study this task, we curate a novel dataset, Textomics, by integrating data from PMC, PubMed, and Gene Expression Omnibus (GEO) (Edgar et al., 2002) ( Figure 1).",
"GEO is the default database repository for researchers to upload their genomics data matrices, such as gene expression matrices and mutation matrices.",
"Each genomics data matrix in GEO is a sample by feature matrices, where samples are from often humans or mice that are sequenced together to study a specific biological problem, and features are genes or variants.",
"Each matrix is also associated with a few sentences that are written by researchers to summarize this data matrix.",
"After pre-processing, we obtain 22,273 matrix summary pairs, spanning 9 sequencing technology platforms.",
"Each matrix has on average 2,475 samples and 22,796 features.",
"Each summary has on average 46 words.",
"We further propose a novel approach to automatically generate a summary from a genomics data matrix, which is highly noisy and high-dimensional.",
"k nearest neighbor ( k NN) approaches have obtained great success in genomics data by capturing the hidden modules within it (Levine et al., 2015; Baran et al., 2019).",
"The key idea of our method is to find k nearest summaries according to the genomics data similarity and then exploit the attention mechanism to convert these k nearest summaries to a new summary.",
"Our method obtained substantial improvement in comparison to baseline approaches.",
"We further illustrated how we can generate a genomics data matrix from a given summary, offering the possibility to simulate genomics data from textual description.",
"We then introduced how Textomics can be used as a novel benchmark for measuring scientific paper similarity and evaluating scientific paper understanding.",
"To the best of our knowledge, Textomics and k NN-Vec2Text together build up the first large-scale benchmark for genomics data summary generation, and can be broadly applied to a variety of natural language processing tasks.",
"Our paper is written as follows: We first introduce the Textomics dataset (section 2) and describe the Text2Vec and Vec2Text tasks (section 3).",
"We then propose a baseline model and k NN-Vec2Text model for Vec2Text task (section 4.1) and the model for Text2Vec task.",
"We then evaluate our method (section 5) and provide two applications (section 6) based on Textomics dataset.",
"We then discussed the related works and the potential direction of future works (section 7 and 8).",
"We collected genomics data matrices from Gene Expression Omnibus (GEO) (Edgar et al., 2002).",
"The feature of each data matrix represents the expression level of a gene or other genomic measurements of a variant (typically real numbers).",
"The sample of each matrix is an experimental subject, such as an experimental animal or a patient.",
"Each data matrix is associated with an expert-written summary, describing this data matrix.",
"We obtained in total 164,667 matrix-summary pairs, spanning 12,219 sequencing platforms.",
"Samples in different platforms have different features.",
"However, data matrices belonging to the same sequencing platform are from the same species and share the same set of features, thus can be used together for model training.",
"To further alleviate the missing feature problem, we kept the top-20000 features with a lower missing rate and filtered out the rest.",
"We further selected 9 platforms with the average lowest rate of missing value and the largest amount of matrix-summary pairs to guarantee the quality and the scale of the dataset.",
"After all, we imputed the resulted data matrices using averaging imputation across different features.",
"Data matrices belonging to the same platform have distinct samples (e.g., patient samples collected from two hospitals).",
"To make them com-4879 parable and provide fixed-size features for machine learning models, we empirically used a five-number summary to represent each data matrix.",
"In particular, we calculated the smallest, the first quartile, the median, the third quartile, and the largest value of each feature across samples in a specific data matrix.",
"We then concatenated these values of all features, resulting in a 100k-dimensional feature vector for each data matrix.",
"Compared with other statistics such as mean, median, and mode of the features, the five number statistics maintain the patterns hidden in the raw matrices better.",
"This vector will be finally used as the input to the machine learning model.",
"All genomics data summaries we collected were written by the biologists who generate the corresponding genomics data matrices.",
"Therefore, these summaries can properly reflect biologists' descriptions of their datasets.",
"Since the summary is the first piece of information that one can learn about the dataset, authors often tend to clearly characterize their dataset in the summary.",
"However, directly leveraging raw data of these summaries is questionable.",
"On the syntactic level, the lengths of summary for each sample are different and comments are often used in genomics descriptions.",
"In order to align our data and leverage the advanced Transformer model that requires fix-length sentences as well as simplifies the structure of the summary, we empirically removed the text in the brackets and truncated the summaries length to 64 words (the percentage of summaries with a length greater than 64 is 41%).",
"On the semantic level, there could be non-informative summaries such as a simple sentence Please see our data below' and some outliers that are substantially different from other summaries.",
"In order to increase the quality of these genomics data summaries, we manually inspected and removed the non-informative summary and excluded the outliers based on the pairwise BLEU (Papineni et al., 2002) scores through a progres-sive automated procedure.",
"Specifically, for every summary, we treated it as the query text and calculated the pairwise BLEU-1 scores with all other summaries, filtered out those median that is lower than 0.09, and then re-applied the procedure with a higher threshold of 0.13.",
"Finally, each of the 9 platforms contains 471 matrix-summary pairs on average, presenting a desirable number of training samples to develop data summary generation models.",
"We summarized the statistics of these 9 platforms in Supplementary Table S1.",
"Some of the data matrices are associated with a scientific paper, which describes how the authors generated and used the data.",
"Therefore, the data matrix and the summary can be used to help embed these papers.",
"We additionally retrieved these papers from PubMed and PMC databases according to the paper titles enclosed in GEO.",
"We obtained the full text for those 7,691 freely accessible ones ( Supplementary Table S1).",
"We will introduce two applications that jointly use scientific papers and matrix-summary pairs in section 6.",
"We aim to accelerate genomics discovery by generating a textual summary given the five-number summary-based vector of a genomics data matrix.",
"We refer to the five-number summary-based vector as a gene feature vector for simplicity.",
"Specifically, consider textual summary domain D and gene feature vector domain V , let D = { DD , DV } = { ( d i , v i ) } Ni =1 dist P ( D , V ) be a dataset containing N summary-vector pairs sampled from the joint distribution of these two domains, where d i (cid:44) (cid:104) d 1 i , d 2 i , ..., d n di i (cid:105) denotes a token sequence and v i R l v denotes the gene feature vector.",
"Here d ji C , C is the vocabulary.",
"We now formally define two cross-domain gener-0.0 0.2 0.4 0.6 0.8 1.0 Gene feature vector similarity 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 S u m m a r y e m b e dd i n g s i m il a r i t y Spearman correlation=0.45 d e n s i t y Figure 2: Density plot showing the Spearman correlation between text-based similarity (y-axis) and vector-based similarity (x-axis) on sequencing platform GPL6246.",
"ation tasks, Vec2Text and Text2Vec, based on our 4880 dataset.",
"Given a gene feature vector v i , Vec2Text aims to generate a summary d i that could best describe this vector v i ; given a textual summary d i , Text2Vec aims to generate the gene feature vector v i that d i describes.",
"Since we are studying a novel task on a novel dataset, we first examined the feasibility of this task.",
"To this end, we obtained the dense representation of each textual summary using the pre-trained SPECTER model (Cohan et al., 2020) and use these representations to calculate a summary-based similarity between each pair of summaries.",
"We also calculated a vector-based similarity based on the gene feature vector using the cosine similarity.",
"We found that these two similarity measurements show a substantial agreement ( Figure 2, Supplementary Table S2).",
"After fil-tering out the outliers, all 9 platforms achieved a Spearman correlation greater than 0.2, suggesting the possibility to generate textual summary from the gene feature vector and vice versa.",
"We first introduce a baseline model that tries to encode gene feature vectors into the semantic embedding space and then decodes it to generate text.",
"The baseline model contains a word embedding function Emb(.), a gene feature vector encoder Enc v (.) and a decoder Dec v (.).",
"Given a gene feature vector v i , the encoder will first embed the data into a semantic representation space s (0) i = Enc v ( v i ) , and then the decoder will start from this representation for the text generation.",
"The generation process is autoregressive.",
"It generates j-th word d ( j ) i and its embedding s ( j ) i as: P ( d ( j ) i | s ( <j ) i ) = Dec v ( s ( <j ) i ) , j = 1 , ..., n d i .",
"(1) Then we sample the next word and obtain its embedding as: s ( j ) i = Emb ( d ( j ) i ) , d ( j ) i sample P ( d ( j ) i | s ( <j ) i ) .",
"(2) This model is trained using the following loss function: L baseline = 1 | DV | | DV | (cid:88) i =1 n di (cid:88) j =1 log P ( d ( j ) i | s ( <j ) i ) .",
"The baseline model attempts to learn an encoder that projects a gene feature vector to a semantic representation.",
"However, the substantial noise and the high-dimensionality of the gene feature vector pose great challenges to effectively learn that projection.",
"k -nearest neighbors models have been extensively used as the solution to overcome such issues in genomics data analysis (Levine et al., 2015; Baran et al., 2019).",
"Therefore, one plausible solution is to explicitly leverage summaries from similar gene feature vectors to improve the generation.",
"Inspired by the encouraging performance in using k -nearest neighbors ( k NN) in seq2seq models (Khandelwal et al., 2020, 2021) and genomics data analysis (Levine et al., 2015; Baran et al., 2019), we propose to convert the Vec2Text problem to a Text2Text problem according to the k -nearest neighbor of each vector.",
"For a given gene feature vector g , we use e i R to denote its Euclidean distance to another gene feature vectors v i in D .",
"We then select the summaries of k samples that have the minimum Euclidean distances as the reference summary list t = [ d j 1 , ..., d j k ] , where j m { 1 , 2 , ..., | D |} denotes the index of ordered summaries w.r.t the Euclidean distance, i.e, e j 1 e j 2 ... e j | D | .",
"In addition to alleviating the noise in genomics data using the reference summary list (Levine et al., 2015; Baran et al., 2019), our method explicitly converts the Vec2Text problem to a Text2Text problem, and can thus seamlessly incorporate many advanced pre-trained language models into our framework.",
"The resulted problem we need to solve is a k sources to one target generation problem.",
"One naive solution is to concatenate the k reference summaries together.",
"However, this concatenation will make the source text much longer than the target text and how to order each summary during concatenation also remains unclear.",
"Instead, we propose to transform this problem into k one-to-one generation problem and then use attention-based strategy to fuse them.",
"Concretely, let n j = max { n j 1 , ..., n j k } be the maximum length among all the reference summaries.",
"We first get the representation of each summary x j m = Emb ( d j m ) = (cid:104) x (1) j m , ..., x ( n j ) j m (cid:105) for m = 1 , ..., k .",
"Here x ( i ) j m denotes the vector embedding of the i-th word in m-th summary.",
"We construct fixed-length reference summaries by padding after the end of each summary with length less than n j .",
"We then utilize self-attention module (SA) (Vaswani et al., 2017) to get the aggregated embedding of each reference with their embeddings as well as the gene feature vector distance e i .",
"Let Q r , K r , V r be the query, key, value matrices of embedding 4881 sequence r = (cid:104) r (1) , ..., r ( l r ) (cid:105) , we have: SA ( r ) = Attention ( Q r , K r , V r ) .",
"(4) We then calculate the attention score as following: a j m = SA ( (cid:104) x (1) j m , ..., x ( n jk ) j m (cid:105) ) , (5) sc j = SA ( (cid:104) e j 1 a j 1 , ..., e j k a j k (cid:105) ) , (6) where sc j = [ sc j 1 , ..., sc j k ] R k .",
"Here we used a 2-layer self attention scheme to first acquire the aggregated feature of each summary and then calculate the attention score based on that.",
"The fi-nal score is then calculated based on the attention scores and temperature as: w j m = exp ( sc j m ) (cid:80) kl =1 exp ( sc j l ) .",
"(7) Then, we aggregate embedding sequences by taking weighted averages: x ( l ) j = k (cid:88) m =1 w j m x ( l ) j m , l = 1 , ..., n j .",
"(8) Let P <l,x ( d ) = P LM ( d ( l ) | d ( <l ) , x ) , 0 < l < n d be the probability distribution of d ( l ) output by the language model LM conditioned on the sequences of the embedding vectors x and the first l 1 sequence tokens.",
"We feed the aggregated embedding sequences into the language model to reconstruct the summary d using an autoregressive-based loss function: L k NN-Vec2Text = (cid:88) d DD n d (cid:88) l =1 log P <l, x j ( d ) | DD | .",
"(9) 4.2 Text2Vec We model the reverse problem of generating the gene feature vector v from a textual summary d as a regression problem.",
"Our model is composed with a semantic encoder Enc d ( . ) and a readout head MLP ( . ) .",
"Specifically, the encoder will embed the textual summary into dense representation x = Enc d ( d ) , and the readout head will map the representation to the gene feature vector v = MLP ( x ) .",
"Then we train this model by minimizing the rooted mean squared errors (RMSE): L v = (cid:115) 1 | DV | (cid:88) v i DV || v i v i || 22 .",
"To evaluate the performance of k NN-Vec2Text on the task of Vec2Text, we compared it to the baseline models in 4.1.",
"For the baseline models, we used a one layer MLP network as its encoder, and tested with different decoder structure, including canonical Transformer (decoder of T5) (Vaswani et al., 2017), GPT-2 (Radford et al., 2019), and Sent-VAE (Bowman et al., 2016).",
"For k NN-Vec2Text, we directly used both the encoder and the decoder of T5 (Raffel et al., 2020), one of the state-of-the-art Transformer style models.",
"we set k = 4 and = 0 .",
"1 as this setting achieved the best empirical performance, though it is worth noting that our model is robust on the choices of k (from 1 to 4) and (from 0 to 1).",
"For all 9 platforms, we reported the average performance under 5-fold cross validation to evaluate the robustness of our method.",
"The results of BLEU-1 score (Pa-pineni et al., 2002) are summarized in Figure 3a .",
"We found that k NN-Vec2Text substantially outperformed other methods by a large margin.",
"Specifically, k NN-Vec2Text obtained a 0.206 BLEU-1 score on average while none of the other three methods achieved an average BLEU-1 score greater than 0.150.",
"The prominent performance of our method demonstrates the effectiveness of using a k -nearest-neighbor approach to convert the Vec2Text problem to a Text2Text problem.",
"To further understand the superior performance of the k NN-Vec2Text model, we presented a case study in Table 1 .",
"In this case study, the generated summary is highly accurate compared to the ground truth summary.",
"By examining the summaries of the 4 nearest neighbors in the gene feature vector space, we found that the generated summary is composed of short spans from each individual neighbor, again indicating the advantage of using a k -nearest neighbor for this task.",
"Our method leveraged an attention mechanism to unify these four neighbors, thus offering an accurate generation.",
"We also observed consistent improvement of our method over comparison approaches on other metrics and summarized the results in Supplementary Table S3.",
"We next used the Text2Vec task to illustrate how our dataset can be used to compare the performance of different pre-trained language models.",
"In particular, we compared a recently proposed scientific paper embedding method SPECTER (Cohan et al., 2020), which has demonstrated prominent performance in a variety of scientific paper analysis tasks, with SciBERT (Beltagy et al., 2019), BioBERT (Lee 4882 GPL96 GPL6244 GPL10558 GPL6887 GPL198 GPL570 GPL6246 GPL1261 GPL13534 Textomics platform 0.0 0.1 0.2 0.3 BLEU1 s c o r e kNN-Vec2Text GPT-2 Sent-VAE Transformer GPL6887 GPL10558 GPL1261 GPL96 GPL198 GPL570 GPL6244 GPL6246 GPL13534 Textomics platform 10% 30% 50% 70% I m p r o v e m e n t o v e r b a s e li n e SPECTER SciBERT SentBERT BioBERT BERT a b Figure 3: Performance on Vec2Text",
"et al., 2020) and SentBERT (Wang and Kuo, 2020) and the vanilla BERT (Devlin et al., 2019).",
"While the other language models directly take the token sequence as the input, SPECTER model needs to take both the abstract and the title.",
"To make a fair comparison, we concatenated the title and the summary as the input for models other than SPECTER.",
"For all 9 platforms, we reported the average performance under 5-fold cross validation.",
"We further implemented a simple averaging baseline approach that predicts the vector for a test summary according to the average vectors of training samples.",
"This baseline does not utilize any textual summary and can thus help us assess the effect of using textual summary information in this task.",
"We used RMSE to evaluate the performance of all methods.",
"We reported the RMSE improvement of each method over the averaging baseline model in Figure 3b .",
"We found that all methods outperform the baseline approaches by gaining at least 15% improvement, indicating the importance of considering textual summary in this task.",
"SPECTER achieved the best overall performance among all five methods, suggesting the advantage of separately modeling the title and the abstract when embedding scientific papers.",
"Embedding scientific papers is crucial to effectively identify emerging research topics and new knowledge from scientific literature.",
"To this end, many machine learning models have been proposed to embed scientific papers into dense embeddings and then applied these embeddings for a variety of downstream applications (Cohan et al., 2020; Lee et al., 2020; Wang and Kuo, 2020; Beltagy et al., 2019; Devlin et al., 2019).",
"However, there is currently limited golden standard that can measure the similarity between two papers.",
"As a result, existing approaches use surrogate metrics such as citation relationship, keywords, and user activities to evaluate their paper embeddings (Cohan et al., 4883 GPL198 GPL13534 GPL6244 GPL570 GPL10558 GPL6246 GPL1261 GPL6887 GPL96 Textomics platform 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Sp e a r m a n c o rr e l a t i o n Abstract Introduction Method Result Conclusion GPL198 GPL13534 GPL6244 GPL570 GPL10558 GPL6246 GPL1261 GPL6887 GPL96 Textomics platform 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Sp e a r m a n c o rr e l a t i o n SPECTER SentBERT SciBERT BioBERT BERT a b Figure 4: Performance on using Textomics as the benchmark to evaluate scientific paper embeddings.",
"Textomics can be used to measure these paper embedding approaches by examining the consistency between the embedding-based paper similarity and the embedding-based summary similarity since both the paper and the summary are written by the same authors.",
"In particular, for a pair of summaries d i , d j DD , let t i , t j be the text (e.g., abstracts) extracted from their corresponding scientific papers.",
"Let Enc d be the encoder of the paper embedding method we want to evaluate.",
"We first get their embeddings as: s d i , s d j = Enc d ( d i ) , Enc d ( d j ) R l s , (11) s t i , s t j = Enc d ( t i ) , Enc d ( t j ) R l s .",
"(12)",
"We then compute the pairwise Euclidean distance between all pairs of summaries and all pairs of paper text as: s d i,j = (cid:118)(cid:117)(cid:117)(cid:116) l s (cid:88) k =1 ( s ( k ) d i s ( k ) d j ) 2 R, (13) s t i,j = (cid:118)(cid:117)(cid:117) (cid:116) l s (cid:88) k =1 ( s ( k ) t i s ( k ) t j ) 2 R. (14) To evaluate the quality of the encoder Enc d , we can calculate the Spearman correlation between the pairwise summary similarity and the pairwise text similarity.",
"A larger Spearman correlation means the summary / textual contents of two samples in the pair are better aligned with each other, which indicates this Enc d is more accurate in embedding scientific papers.",
"As a proof-of-concept, we obtained the full text of 7,691 papers in our dataset from the freely accessible PubMed Central.",
"We segmented each paper into five sections, which included abstract, introduction, method, result and conclusion.",
"We first compared different paper embedding methods using the abstract of a paper.",
"The five embedding methods we considered are introduced in section 5.1.",
"Since SPECTER takes both the title and paragraph as the input we used the first sentence of the summary as a pseudo-title when encoding the summary.",
"The results are summarized in Figure 4a .",
"We found that SPECTER was substantially better than other methods on 8 out of the 9 platforms.",
"SPECTER is specifically developed to embed scientific papers by processing the title and the abstract separately, whereas other pre-trained language models simply concatenated the title and the abstract.",
"The superior performance of SPECTER suggests the importance of separately modeling paper title and abstract when embedding scientific papers.",
"SentBERT obtained the best performance among four pre-trained language models, partially due to its prominent performance in sentence-level embedding.",
"We further noticed that the relative performance among different methods is largely consistent with the previous work evaluated on other metrics (Cohan et al., 2020), demonstrating the high-quality of Textomics.",
"After observing the superior performance of SPECTER, we next investigated which section of the paper can be best used to assess paper similarity.",
"Although existing paper embedding approaches often leverage the abstract for embedding, other sections, such as introduction and results might also be informative, especially for papers describing a specific dataset or method.",
"We thus applied SPECTER to embed five different sections of each scientific paper and used Textomics to evaluate which section can best reflect paper similarity.",
"We observed a consistent improvement of using the abstract section 4884 in comparison to other paper sections ( Figure 4B), which is consistent with the intuition that the abstract represents a good summary of the scientific paper, again indicating the reliability of using Textomics to evaluate paper embedding methods.",
"Creating masked sentences and then filling in these masks can examine whether the machine learning model has properly understood a scientific paper (Yang et al., 2019; Guu et al., 2020; Ghazvininejad et al., 2019; Bao et al., 2020; Salazar et al., 2020).",
"However, one challenge in such research is how to generate masked sentences that are relevant to a given paper while also ensuring the answer is enclosed in the paper.",
"Our dataset could be used to automatically generate such masked sentences using the summary, which is highly relevant to the paper but also not overlapped with the paper.",
"In particular, we can mask out keywords from the summary and then use this masked summary as the question and let a machine learning model to find the answer from the non-overlapping scientific paper.",
"Let C bio be a dictionary that contains biological keywords we want to mask out from the summary, ( d i , t i ) be a pair of textual summary and paragraph text extracted from its corresponding scientific paper.",
"If the j -th word w i = d ( j ) i C bio in the summary belongs to C bio , our proposed task is to predict which word in C bio is the missing word in d masked given t i .",
"The masked summary d masked is the same as d i except its j -th word is substituted with [PAD].",
"For simplicity, we only mask at most one token in d i .",
"We, therefore, form our task as a multi-class classification problem.",
"Sim-GPL570 GPL6887 GPL6244 GPL13534 GPL6246 GPL1261 GPL10558 GPL198 GPL96 Textomics platform 0.1 0.3 0.5 0.7 A cc u r a c y CovocEupath Obi_iee ObiPlanp EcocoreXpo Argo TrakPremedonto Figure 5: Bar plot showing the accuracy of filling the masked sentences of ten biomedical categories across 9 platforms using Textomics as the benchmark.",
"the paragraph text t i .",
"To generate C bio , we leveraged a recently developed biological terminology dataset Graphine (Liu et al., 2021), which provides the biological phrases spanning 227 categories.",
"We selected 10 categories that can produce the largest number of masked sentences in Textomics.",
"We manually filtered ambiguous words and stop words.",
"On average, each category contains 317 keywords.",
"We used a fully connected neural network to perform the multi-class classification task.",
"The input feature is the concatenation of the masked summary embedding and the paragraph embedding.",
"We used SPECTER to derive these embeddings as it has obtained the best performance in our previous analysis.",
"The results are summarized in Figure 5 .",
"We observed improved accuracy on all ten categories, which are much better than the 0.4% accuracy by random guessing, indicating the usefulness of our benchmark in scientific paper understanding.",
"Finally, we found that the performance of each category varied across different platforms, suggesting the possibility to further improve the performance by jointly learning from all platforms.",
"Our task is related to existing works that take structured data as the input and then generate the unstructured text.",
"Different input data modalities and related datasets have been considered in the literature, including text triplets in RDF graphs (Gardent et al., 2017; Ribeiro et al., 2020; Song et al., 2020; Chen et al., 2020)), text-data tables (Lebret et al., 2016; Rebuffel et al., 2022; Dusek et al., 2020; Rebuffel et al., 2020; Puduppully and Lapata, 2021; Chen et al., 2020), electronic medical records (Lee, 2018; Guan et al., 2018), radiology reports (Wang et al., 2021; Yuan et al., 2019; Miura et al., 2021), and other continuous data modalities without explicit textual structures such as image (Lin et al., 2014; Cornia et al., 2020; Ke et al., 2019; Radford et al., 2021), audio (Drossos et al., 2020; Manco et al., 2021; Wu et al., 2021; Mei et al., 2021), and video (Li et al., 2021; Ging et al., 2020; Zhou et al., 2018; Li et al., 2020).",
"Different from these structures, our dataset takes a high dimensional genomics feature matrix as input, which doesn't exhibit structure and is thus substantially different from other modalities.",
"Moreover, our dataset is the first dataset that aims to convert genomics feature vector to textual summary.",
"The substantial noise and high-dimensionality of genomics data matrices 4885 further pose unique challenges in text generation.",
"Our k NN-Vec2Text model is inspired by the re-cent success in applying k NN-based language models to machine translation (Khandelwal et al., 2021) and language models (Khandelwal et al., 2020; He et al., 2021; Ton et al., 2021).",
"The main difference between our methods and their approaches is that while we try to leverage k NN in the genomics vector space to construct reference text, they use k NN in the text embedding space during the autoregressive generation process to help adjust the sample distribution.",
"Some other methods can be used to generate text from vectors, such as (Bowman et al., 2016; Song et al., 2019; Miao and Blunsom, 2016; Montero et al., 2021; Zhang et al., 2019).",
"Their inputs are latent vectors that need to be inferred from the data and do not have specific meanings, which are different from our gene feature vectors.",
"In this paper, we have proposed a novel dataset Textomics, containing 22,273 pairs of genomics matrices and their corresponding textual summaries.",
"We then introduce a novel task of Vec2Text based on our dataset.",
"This task aims to generate the textual summary based on the gene feature vector.",
"To address this task, we propose a novel method k NN-Vec2Text, which constructs the reference text using nearest neighbors in the gene feature vector space and then generates a new summary according to this reference text.",
"We further introduce two applications that can be advanced using our dataset.",
"One application aims at evaluating scientific paper similarity according to the similarity of its corresponding data summary, and the other application leverages our dataset to automatically generate masked sentences for scientific paper understanding.",
"To the best of our knowledge, Textomics and k NN-Vec2Text serve as the first large-scale genomics data description benchmark, and we envision it will be broadly applied to other natural language processing and biomedical tasks.",
"On the biomedical side, we provide the benchmark to develop new NLP tools that can generate the description for a genomics data.",
"Since each pub-lic genomics data needs a description, such tools will substantially accelerate this process.",
"Also, descriptions generated from Textomics could contain new knowledge.",
"While humans write the description almost solely based on that single dataset, description generation models jointly consider thousands of datasets, enabling the transfer of knowledge from other datasets.",
"The generated description can guide biologists to write more informative descriptions, which ultimately leads to better and larger genomics description data.",
"When biologists start to obtain the generated description from NLP tools, they will be able to write more informative descriptions with the assistance from these NLP tools.",
"On the NLP side, the relationship between a summary and a dataset is analogous to the relationship between an abstract and a scientific paper.",
"A high-quality summary ideally contains all perspectives of a study, including problems, methods, and discoveries.",
"Moreover, our work will bridge the NLP and the genomics community and motivate people to analyze genomics data using NLP methods based on the multi-modality dataset introduced in this paper.",
"Textomics could also be used to help scientific paper analysis tasks, such as paper recommendation (Bai et al., 2019), citation text generation (Luu et al., 2020), and citation prediction (Suzen et al., 2021).",
"Our method searches for the nearest neighbours by calculating the Euclidean distance between five-number summary vectors of the genomics feature matrices.",
"However, this might lose useful information hidden in the original matrices.",
"It's challenging and worth exploring end-to-end approaches that can learn embeddings from the genomics feature matrices instead of representing them as five-number summary vectors.",
"On the Text2Vec side, one remaining challenge that could be the future direction of our work is to directly generate the whole genomics feature matrix instead of the five-number summary vector.",
"Also, it would be interesting yet challenging to jointly learn the Text2Vec and the Vec2Text tasks, and one potential solution is to further decode the generated vector to reconstruct the embedding of summaries in Text2Vec, and leverage the resulted decoder to predict the embedding of text by using k NN method in the text embedding space.",
"Also, it is interesting to jointly model data from multiple platforms, which might lead to beneficial results by transferring biological insights learned from different platforms."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"objective",
"other",
"method",
"method",
"other",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"State-of-the-art NLP models can often be fooled by human-unaware transformations such as synonymous word substitution.",
"For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction is can not be altered by any possible synonymous word substitution.",
"In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions on the input sentences, and leverage the statistical properties of the ensemble to provably certify the robustness.",
"Our method is simple and structure-free in that it only requires the black-box queries of the model outputs, and hence can be applied to any pre-trained models (such as BERT) and any types of models (world-level or subword-level).",
"Our method significantly outperforms recent state-of-the-art methods for certified robustness on both IMDB and Amazon text classification tasks.",
"To the best of our knowledge, we are the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.",
"Deep neural networks have achieved state-of-the-art results in many NLP tasks, but also have been shown to be brittle to carefully crafted adversarial perturbations, such as replacing words with similar words (Alzantot et al., 2018), adding extra text (Wallace et al., 2019), and replacing sentences with semantically similar sentences (Ribeiro et al., 2018).",
"These adversarial perturbations are imperceptible to humans, but can fool deep neural networks and break their performance.",
"Efficient methods for defending these attacks are of critical imEqual contribution portance for deploying modern deep NLP models to practical automatic AI systems.",
"In this paper, we focus on defending the synonymous word substitution attacking (Alzantot et al., 2018), in which an attacker attempts to alter the output of the model by replacing words in the input sentence with their synonyms according to a synonym table, while keeping the meaning of this sentence unchanged.",
"A model is said to be certified robust if such an attack is guaranteed to fail, no matter how the attacker manipulates the input sentences.",
"Achieving and verifying certified robustness is highly challenging even if the synonym table used by the attacker is known during training (see Jia et al., 2019), because it requires to check every possible synonymous word substitution, whose number is exponentially large.",
"Various defense methods against synonymous word substitution attacks have been developed (e.g., Wallace et al., 2019; Ebrahimi et al., 2018), most of which, however, are not certified robust in that they may eventually be broken by stronger attackers.",
"Recently, Jia et al. (2019); Huang et al. (2019) proposed the first certified robust methods against word substitution attacking.",
"Their methods are based on the interval bound propagation (IBP) method (Dvijotham et al., 2018) which computes the range of the model output by propagating the interval constraints of the inputs layer by layer.",
"However, the IBP-based methods of Jia et al. (2019); Huang et al. (2019) are limited in several ways.",
"First, because IBP only works for certifying neural networks with continuous inputs, the inputs in Jia et al. (2019) and Huang et al. (2019) are taken to be the word embedding vectors of the input sentences, instead of the discrete sentences.",
"This makes it inapplicable to character-level (Zhang et al., 2015) and subword-level (Bojanowski et al., 2017) model, which are more widely used in practice (Wu et al., 2016).",
"In this paper, we propose a structure-free certified defense method that applies to arbitrary models that can be queried in a black-box fashion, without any requirement on the model structures.",
"Our method is based on the idea of randomized smoothing, which smooths the model with random word substitutions build on the synonymous network, and leverage the statistical properties of the randomized ensembles to construct provably certification bounds.",
"Similar ideas of provably certification using randomized smoothing have been developed recently in deep learning (e.g., Cohen et al., 2019; Salman et al., 2019; Zhang et al., 2020; Lee et al., 2019), but mainly for computer vision tasks whose inputs (images) are in a continuous space (Cohen et al., 2019).",
"Our method admits a substantial extension of the randomized smoothing technique to discrete and structured input spaces for NLP.",
"We test our method on various types of NLP models, including text CNN (Kim, 2014), Char-CNN (Zhang et al., 2015), and BERT (Devlin et al., 2019).",
"Our method significantly outperforms the recent IBP-based methods (Jia et al., 2019; Huang et al., 2019) on both IMDB and Amazon text classification.",
"In particular, we achieve an 87.35% certified accuracy on IMDB by applying our method on the state-of-the-art BERT, on which previous certified robust methods are not applicable.",
"In a text classification task, a model f ( X ) maps an input sentence X X to a label c in a set Y of discrete categories, where X = x 1 , . . . , x L is a sentence consisting of L words.",
"In this paper, we focus on adversarial word substitution in which an attacker arbitrarily replaces the words in the sentence by their synonyms according to a synonym table to alert the prediction of the model.",
"Specifically, for any word x , we consider a pre-defined synonym set S x that contains the synonyms of x (including x itself).",
"We assume the synonymous relation is symmetric, that is, x is in the synonym set of all its synonyms.",
"The synonym set S x can be built based on GLOVE (Pennington et al., 2014).",
"With a given input sentence X = x 1 ,.",
".",
". , x L , the attacker may construct an adversarial sentence X (cid:48) = x (cid:48) 1 , . . . , x (cid:48) L by perturbing at most R L words x i in X to any of their synonyms x (cid:48) i S x i , SX := (cid:8) X (cid:48) : (cid:13)(cid:13) X (cid:48) X (cid:13)(cid:13) 0 R, x (cid:48) i S x i , i (cid:9) , where SX denotes the candidate set of adversarial sentences available to the attacker.",
"Here (cid:107) X (cid:48) X (cid:107) 0 := (cid:80) Li =1 I { x (cid:48) i (cid:54) = x i } is the Hamming distance, with I {} the indicator function.",
"It is expected that all X (cid:48) SX have the same semantic meaning as X for human readers, but they may have different outputs from the model.",
"The goal of the attacker is to find X (cid:48) SX such that f ( X ) (cid:54) = f ( X (cid:48) ) .",
"Certified Robustness Formally, a model f is said to be certified robust against word substitution attacking on an input X if it is able to give consistently correct predictions for all the possible word substitution perturbations, i.e, y = f ( X ) = f ( X (cid:48) ) , for all X (cid:48) SX , (1) where y denotes the true label of sentence X .",
"Deciding if f is certified robust can be highly challenging, because, unless additional structural information is available, it requires to exam all the candidate sentences in SX , whose size grows exponentially with R .",
"In this work, we mainly consider the case when R = L , which is the most challenging case.",
"Our idea is to replace f with a more smoothed model that is easier to verify by averaging the outputs of a set of randomly perturbed inputs based on random word substitutions.",
"The smoothed classifier f RS is constructed by introducing random perturbations on the input space, f RS ( X ) = arg max c Y PZ X ( f ( Z ) = c ) , where X is a probability distribution on the input space that prescribes a random perturbation around X .",
"which is the soft score of class c under f RS .",
"The perturbation distribution X should be cho-sen properly so that f RS forms a close approximation to the original model f (i.e., f RS ( X ) f ( X ) ), and is also sufficiently random to ensure that f RS is smooth enough to allow certified robustness (in the sense of Theorem 1 below).",
"In our work, we define X to be the uniform distribution on a set of random word substitutions.",
"Specifically, let P x be a perturbation set for word x in the vocabulary, which is different from the synonym set S x .",
"In this work, we construct P x based on the top K nearest neighbors under the cosine similarity of GLOVE vectors, where K is a hyper-parameter that controls the size of the perturbation set; see Section 4 for more discussion on P x .",
"For a sentence X = x 1 , . . . , x L , the sentence-level perturbation distribution X is defined by randomly and independently perturbing each word x i to a word in its perturbation set P x i with equal probability, that is, X ( Z ) = L (cid:89) i =1 I { z i P x i } | P x i | , where Z = z 1 , . . . , z L is the perturbed sentence and | P x i | denotes the size of P x i .",
"Note that the random perturbation Z and the adversarial candidate X (cid:48) SX are different.",
"We now discuss how to certify the robustness of the smoothed model f RS .",
"Recall that f RS is certified robust if y = f RS ( X (cid:48) ) for any X (cid:48) SX , where y is the true label.",
"A sufficient condition for this is min X (cid:48) SX g RS ( X (cid:48) , y ) max X (cid:48) SX g RS ( X (cid:48) , c ) c (cid:54) = y, where the lower bound of g RS ( X (cid:48) , y ) on X (cid:48) SX is larger than the upper bound of g RS ( X (cid:48) , c ) on X (cid:48) SX for every c (cid:54) = y .",
"Define q x = min x (cid:48) S x | P x P x (cid:48) | / | P x | , where q x indicates the overlap between the two different perturbation sets.",
"Then min X (cid:48) SX g RS ( X (cid:48) , c ) max( g RS ( X , c ) q X , 0) max X (cid:48) SX g RS ( X (cid:48) , c ) min( g RS ( X , c ) + q X , 1) .",
"The key step is hence to calculate the upper and low bounds of g RS ( X (cid:48) , c ) for c Y and X (cid:48) SX , which we address in Theorem 1 below.",
"All proofs are in Appendix A.2.",
"Theorem",
"1. (Certified Lower/Upper Bounds) Assume the perturbation set P x is constructed such that | P x | = | P x (cid:48) | for every word x and its synonym x (cid:48) S x .",
"For a given sentence X = x 1 , . . . , x L , we sort the words according to q x , such that q x i 1 q x i 2 q x iL .",
"where q X := 1 (cid:81) Rj =1 q x ij .",
"Equivalently, this says (cid:12)(cid:12) g RS ( X (cid:48) , c ) g RS ( X , c ) (cid:12) (cid:12) q X , any label c Y .",
"The idea is that, with the randomized smoothing, the difference between g RS ( X (cid:48) , c ) and g RS ( X , c ) is at most q X for any adversarial candidate X (cid:48) SX .",
"Therefore, we can give adversarial upper and lower bounds of g RS ( X (cid:48) , c ) by g RS ( X , c ) q X , which, importantly, avoids the difficult adversarial optimization of g RS ( X (cid:48) , c ) on X (cid:48) SX , and instead just needs to evaluate g RS ( X , c ) at the original input X .",
"We are ready to describe a practical criterion for checking the certified robustness.",
"Therefore, certifying whether the model gives consistently correct prediction reduces to checking if X is positive, which can be easily achieved with Monte Carlo estimation as we show in the sequel.",
"Estimating g RS ( X , c ) and X Recall that g RS ( X , c ) = PZ X ( f ( Z ) = c ) .",
"We can estimate g RS ( X , c ) with a Monte Carlo estimator (cid:80) ni =1 I { f ( Z ( i ) ) = c } /n , where Z ( i ) are i.i.d. samples from X .",
"And X can be approximated accordingly.",
"Using concentration inequality, we can quantify the non-asymptotic approximation error.",
"This allows us to construct rigorous statistical procedures to reject the null hypothesis that f RS is not certified robust at X (i.e., X 0 ) with a given significance level (e.g., 1% ).",
"See Appendix A.1 for the algorithmic details of the testing procedure.",
"We can see that our procedure is structure-free in that it only requires the black-box assessment of the output f ( Z ( i ) ) of the random inputs, and does not require any other structural information of f and f RS , which makes our method widely applicable to various types of complex models.",
"Tightness A key question is if our bounds are sufficiently tight.",
"The next theorem shows that the lower/upper bounds in Theorem 1 are tight and can not be further improved unless further information of the model f or f RS is acquired.",
"Theorem",
"2. (Tightness) Assume the conditions of Theorem 1 hold.",
"For a model f that satisfies f RS ( X ) = y and y B as defined in Proposition 1, there exists a model f such that its related smoothed classifier g RS satisfies g RS ( X , c ) = ...",
"g RS ( X , c ) for c = y and c = y B , and min X (cid:48) SX g RS ( X (cid:48) , y ) = max( g RS ( X , y ) q X , 0) max X (cid:48) SX g RS ( X (cid:48) , y B ) = min( g RS ( X , y B ) + q X , 1) , where q X is defined in Theorem",
"1. In other words, if we access g RS only through the evaluation of g RS ( X , y ) and g RS ( X , y B ) , then the bounds in Theorem 1 are the tightest possible that we can achieve, because we can not distinguish between g RS and the g RS in Theorem 2 with the information available.",
"Figure 1 visualizes the pipeline of the proposed approach.",
"Given the synonym sets SX , we generate the perturbation sets PX from it.",
"When an input sentence X arrives, we draw perturbed sentences { Z ( i ) } from X and average their outputs to estimate X , which is used to decide if the model is certified robust for X .",
"Training the Base Classifier f Our method needs to start with a base classifier f .",
"Although it is possible to train f using standard learning techniques, the result can be improved by considering that the method uses the smoothed f RS , instead of f .",
"To improve the accuracy of f RS , we introduce a data augmentation induced by the perturbation set.",
"Specifically, at each training iteration, we first sample a mini-batch of data points (sentences) and randomly perturbing the sentences using the perturbation distribution X .",
"We then apply gradient descent on the model based on the perturbed mini-batch.",
"Similar training procedures were also used for Gaussian-based random smoothing on continuous inputs (see e.g., Cohen et al., 2019).",
"Our method can easily leverage powerful pre-trained models such as BERT.",
"In this case, BERT is used to construct feature maps and only the top layer weights are finetuned using the data augmentation method.",
"We test our method on both IMDB (Maas et al., 2011) and Amazon (McAuley, 2013) text classification tasks, with various types of models, including text CNN (Kim, 2014), Char-CNN (Zhang et al., 2015) and BERT (Devlin et al., 2019).",
"We compare with the recent IBP-based methods (Jia et al., 2019; Huang et al., 2019) as baselines.",
"Text CNN (Kim, 2014) was used in Jia et al. (2019) and achieves the best result therein.",
"All the baseline models are trained and tuned using the schedules recommended in the corresponding papers.",
"We consider the case when R = L during attacking, which means all words in the sentence can be perturbed simultaneously by the attacker.",
"Code for reproducing our results can be found in https://github.com/ lushleaf/Structure-free-certified-NLP .",
"Synonym Sets Similar to Jia et al. (2019); Alzan-tot et al. (2018), we construct the synonym set S x of word x to be the set of words with 0 .",
"8 cosine similarity in the GLOVE vector space.",
"The word vector space is constructed by post-processing the pre-trained GLOVE vectors (Pennington et al., 2014) using the counter-fitted method (Mrksic et al., 2016) and the all-but-the-top method (Mu and Viswanath, 2018) to ensure that synonyms are near to each other while antonyms are far apart.",
"Perturbation Sets We say that two words x and x (cid:48) are connected synonymously if there exists a path of words x = x 1 , x 2 , . . . , x (cid:96) = x (cid:48) , such that all the successive pairs are synonymous.",
"Let B x to be the set of words connected to x synonymously.",
"Then we define the perturbation set P x to consist of the top K words in B x with the largest GLOVE cosine similarity if | B x | K , and set P x = B x if | B x | < K .",
"Here K is a hyper-parameter that controls the size of P x and hence trades off the smoothness and accuracy of f RS .",
"We use K = 100 by default and investigate its effect in Section 4.2.",
"Evaluation Metric We evaluate the certified robustness of a model f RS on a dataset with the certified accuracy (Cohen et al., 2019), which equals the percentage of data points on which f RS is certified robust, which, for our method, holds when X > 0 can be verified.",
"We first demonstrate that adversarial word substitution is able to give strong attack in our experimental setting.",
"Using IMDB dataset, we attack the vanilla BERT (Devlin et al., 2019) with the adversarial attacking method of Jin et al. (2020).",
"The vanilla BERT achieves a 91% clean accuracy (the testing accuracy on clean data without attacking), but only a 20 .",
"1% adversarial accuracy (the testing accuracy under the particular attacking method by Jin et al. (2020)).",
"We will show later that our method is able to achieve 87 .",
"35% certified accuracy and thus the corresponding adversarial accuracy must be higher or equal to 87 .",
"35% .",
"We compare our method with IBP (Jia et al., 2019; Huang et al., 2019).",
"in Table",
"1. We can see that our method clearly outperforms the baselines.",
"In particular, our approach significantly outperforms IBP on Amazon by improving the 14 .",
"00% baseline to 24 .",
"92% .",
"Thanks to its structure-free property, our algorithm can be easily applied to any pre-trained models and character-level models, which is not easily achievable with Jia et al. (2019) and Huang et al. (2019).",
"Table 2 shows that our method can further improve the result using Char-CNN (a character-level model) and BERT (Devlin et al., 2019), achieving an 87 .",
"35% certified accuracy on IMDB.",
"In comparison, the IBP baseline only achieves a 79 .",
"74% certified accuracy under the same setting.",
"We investigate the trade-off between smoothness and accuracy while tuning K in Table",
"3. We can Method Model Accuracy Jia et al. (2019) CNN 79.74 Huang et al. (2019) CNN 78.74 Ours CNN 81.16 Char-CNN 82.03 BERT 87.35 Table 2: The certified accuracy of different models and methods on the IMDB dataset.",
"see that the clean accuracy decreases when K increases, while the gap between the clean accuracy and certified accuracy, which measures the smoothness, decreases when K increases.",
"The best certified accuracy is achieved when K = 100 .",
"We proposed a robustness certification method, which provably guarantees that all the possible perturbations cannot break down the system.",
"Compared with previous work such as Jia et al. (2019); Huang et al. (2019), our method is structure-free and thus can be easily applied to any pre-trained models (such as BERT) and character-level models (such as Char-CNN).",
"The construction of the perturbation set is of critical importance to our method.",
"In this paper, we used a heuristic way based on the synonym network to construct the perturbation set, which may not be optimal.",
"In further work, we will explore more efficient ways for constructing the perturbation set.",
"We also plan to generalize our approach to achieve certified robustness against other types of adversarial attacks in NLP, such as the out-of-list attack.",
"An nave way is to add the OOV token into the synonyms set of every word, but potentially better procedures can be further explored.",
"This work is supported in part by NSF CRII 1830161 and NSF CAREER 1846421."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"other"
] |
[
"Emojis have become ubiquitous in digital communication, due to their visual appeal as well as their ability to vividly convey human emotion, among other factors.",
"This also leads to an increased need for systems and tools to operate on text containing emojis.",
"In this study, we assess this support by considering test sets of tweets with emojis, based on which we perform a series of experiments investigating the ability of prominent NLP and text processing tools to adequately process them.",
"In particular, we consider tokenization, part-of-speech tagging, dependency parsing, as well as sentiment analysis.",
"Our findings show that many systems still have notable shortcomings when operating on text containing emojis.",
"In our modern digital era, interpersonal communication often takes place via online channels such as instant messaging, email, social media, etc.",
"This entails an increasing need for tools that operate on the resulting digital data.",
"For instance, online conversations can be invaluable sources of insights that reveal fine-grained consumer preferences with regard to products, services, or businesses (Dong and de Melo, 2018).",
"However, the shifts in modality and medium also shape the way we express ourselves, making it increasingly natural for us to embed emojis, images, hashtags into our conversations.",
"In this paper, we focus specifically on emojis, which have recently become fairly ubiquitous in digital communication, with a 2017 study reporting 5 billion emojis being sent daily just on Facebook Messenger (Burge, 2017).",
"Emojis are textual elements that are encoded as characters but rendered as small digital images or icons that can be used to express an idea or emotion.",
"Goals.",
"Due to their increasing prominence, there is a growing need to properly handle emojis whenever one deals with text.",
"We consider a set of popular NLP tools and empirically assess to what extent they support emojis across a set of standard tasks, encompassing tokenization, part-of-speech tagging, dependency parsing, and sentiment analysis.",
"Although emojis can be encoded as Unicode characters, there are unique properties of emoji encoding that merit special consideration, such as skin tone modifiers and composite emoji incorporating multiple basic emojis.",
"Moreover, text harboring emojis may adhere to subtly different conventions than more traditional forms of text, e.g., with regard to token and sentence boundaries.",
"Emojis can take the place of words with different parts-of-speech and assume different grammatical roles.",
"Finally, emojis may of course also alter the semantics of the text, which in turn may, for instance, affect its sentiment polarity.",
"Overview.",
"For our analysis, we draw primarily on social media and study diverse forms of emoji use.",
"We run a series of experiments on such data evaluating each NLP tool to observe its behaviour at different stages in the processing pipeline.",
"The results show that current tools have notable deficiencies in coping with modern emoji use in text.",
"While emoji characters have a long history, they have substantially grown in popularity since their incorporation into Unicode 6.0 in 2010 followed by increasing support for them on mobile devices.",
"Accordingly, numerous studies have sought to explain how the broad availability of emojis has affected human communication, considering grammatical, semantic, as well as pragmatic aspects (Kaye et al., 2017; McCulloch, 2019).",
"Only few studies have specifically considered some of the more advanced technical possibilities that the Unicode standard affords, such as zero width joiners to express more complex concepts.",
"For instance, with regard to emoji skin tone modifiers, Robertson et al. (2020) study in depth how the use of such modifiers varies on social media, including cases of users modulating their skintone, i.e., using a different tone than the one they usually pick.",
"Given the widespread use of emojis in everyday communication, there is an increasing need for NLP tools that can handle them.",
"Prominent NLP toolkits such as Stanford's Stanza (Qi et al., 2020) and NLTK (Bird et al., 2009) power a wide range of user-facing applications.",
"A number of reports compare the pros and cons of popular NLP libraries (Wolff, 2020; Kozaczko, 2018; Choudhury, 2019; Bilyk, 2020), but these primarily consider the features and popularity of the tools, as well as their performance.",
"There have not been studies assessing them with regard to their ability to cope with modern emoji-laden text.",
"Since emojis are becoming increasingly ubiquitous, it is crucial for developers and institutions deploying such software to know whether it can properly handle the kinds of text that nowadays may quite likely arrive as input data.",
"In many real-world settings, applications and services are expected to operate on text containing emojis, and thus it is important to investigate these capabilities.",
"Many academic studies present new models for particular NLP tasks relating to emojis.",
"For instance, Felbo et al. (2017) developed an emoji prediction model for tweets.",
"Weerasooriya et al. (2016) discussed how to extract essential keywords from a tweet using NLP tools.",
"Cohn et al. (2019) attempted to understand the use of emojis from a grammatical perspective, seeking to determine the parts-of-speech of emoji occurrences in a sentence or tweet.",
"Owoputi et al. (2013) proposed an improved part-of-speech tagging model for online conversational text based on word clusters.",
"Proisl (2018) developed a part-of-speech tagger for German social media and Kong et al. (2014) developed a dependency parser for English tweets.",
"However, such work mostly targets just one specific task and is typically not well-integrated with common open source toolkits, which we focus on in our study.",
"As we wish to assess the support of emojis provided by different text processing tools, we first consider some of the different cases of emoji use that one may encounter, in order to compile relevant data.",
"Emojis can appear in a sentence or tweet in different circumstances.",
"They may show up at the beginning or at the end of a tweet.",
"Likewise, they may appear as part of a series of emojis separated by spaces, or can be clustered within a text without any interleaved spacing.",
"Based on observations on a collection of tweets crawled from Twitter (Shoeb et al., 2019), we defined a series of cases distinguishing different aspects of emoji use, including the number of emojis (i.e., single emojis vs. multiple emojis), position of emojis, the use of skin tone modifiers, and so on.",
"For skin tone emojis, the Unicode standard adopts the Fitzpatrick Scale (Fitzpatrick, 1975), according to which the skin tone for selected emojis can be modulated with five different color settings: Light Skin Tone (e.g., ), Medium-Light Skin Tone (e.g., ), Medium Skin Tone (e.g., ), Medium-Dark Skin Tone (e.g., ), and Dark Skin Tone (e.g., ).",
"Internally, an Emoji Modifier Sequence is assumed when a modifier character follows a supported base emoji character, resulting in a single emoji with skin tone.",
"Some characters now classified emojis are encoded in Plane 0, the Basic Multilingual Plane, where 16 bits suffice to encode individual characters.",
"However, the majority of emojis reside in Plane 1, the Supplementary Multilingual Plane, which in the past had mainly been reserved for rare historic scripts.",
"When including the latter, individual characters can no longer be encoded directly within just 16 bits.",
"Hence, we consider whether a tool handles both non-BMP and BMP emojis.",
"Emojis with Zero Width Joiner (ZWJ) join two or more other characters together in sequence to compose a new one.",
"Popular emoji ZWJ sequences include group ones such as the family emoji , consisting in this case of Man, Woman, Girl, Boy emojis, and encoded by combining Man, the U+200D ZWJ code, Woman, U+200D again, Girl, U+200D, and finally Boy.",
"These are rendered as a single emoji on supported platforms.",
"Given the different cases of emoji use discussed above, we searched for relevant examples in a collection of tweets that we compiled earlier from Twitter (Shoeb et al., 2019).",
"The purpose of this endeavor was to assemble a collection of tweets based on a set of most frequently used emojis so that ev-Tweets Count % Total 22.3 M 100 Unique 21.4 M 95.84 No more than 5 tweets from one user 20.8 M 93.27 Only single emoji 5.67 M 25.38 Multiple emojis 16.48 M 73.77 Skin tone modifiers emojis 1.31 M 5.85 Light skin tone emojis 382 K 1.71 Medium light skin tone emojis 386 K 1.73 Medium skin tone emojis 337 K 1.51 Medium dark skin Tone emojis 274 K 1.23 Dark skin tone emojis 53 K 0.24 Zero Width Joiner (ZWJ) emojis 97 K 0.43 Table 1: Emoji Centric Twitter Corpus statistics the distribution of emojis over the ~22 million tweets with regard to the considered emoji use in text ery single tweet contains at least one emoji.",
"The popularity of the emojis was determined using Novak et al. (2015) and Emoji Tracker 1 , a website that monitors the use of emojis on Twitter in real time.",
"In total, we obtained a set of 22.3 million tweets over a span of one year.",
"This collection, named as EmoTag, is readily available online 2 .",
"Table 1 provides corresponding statistics of our collection, showing that even rare phenomena do occur in substantial numbers of tweets.",
"Next, we chose representative samples for each case.",
"We restricted our search to English language tweets and ensured that not all tweets simply consisted of URLs or mentions.",
"The latter are fairly common on social media, and since it would not be very uncommon for a text processing tool to encounter them in tweets, we did also incorporate a few such tweets along with tweets containing genuine text.",
"Ultimately, we obtain a diverse collection of short input texts, including different skintones, ZWJ emojis, and other cases mentioned in Section 3.1 and Table 1.",
"We drew upon the compiled input texts for assessments with regard to different NLP tasks.",
"The following sections describe each of the considered tasks, i.e., Tokenization (Section 4), Part-of-Speech Tagging (Section 5), Dependency Parsing (Section 6), and Sentiment Analysis (Section 7) separately.",
"The full dataset for the following experiments can be found at http://emoji.nlproc.org .",
"Tokenization is the act of breaking up a sequence of strings into a sequence of basic pieces such as",
"words, keywords, phrases, symbols, and other elements, referred to as tokens.",
"In the process of tokenization, some characters such as punctuation marks may be discarded.",
"It is important for a tokenizer to generate meaningful results, as the output of this step becomes the input for subsequent processing steps such as parsing and text mining in the pipeline.",
"In our study, we expect a tokenizer to segment a text into tokens such as words, emojis, and other special characters.",
"While tokenizing a sentence, or a tweet with emojis, in particular, we focus on the position and type of emojis presented earlier in Section",
"3. An emoji can accompany a word with both leading and trailing spaces, or it can be attached to words without any separating whitespace.",
"We typically expect a tokenizer to distinguish an emoji from a word even in the absence of a space delimiter if it appears to constitute a separate concept.",
"The same principle should be followed for emoji clusters, i.e., if multiple emojis occur in a sequence such as , they are expected to be recognized as individual tokens.",
"Another aspect of successful tokenization is adequately handling emoji skin tone modifiers.",
"As emojis can have five different skin tone modifiers, we ensure that our test data contains the same number of tweets from all skin tones.",
"An ideal tokenizer should not split skin tone emoji into two individual characters.",
"For example, the Waving Hand Light Skin Tone emoji should not be split into a regular Waving Hand emoji and a tone modifier .",
"We also test the abilities of tools in terms of handling ZWJ emoji sequences.",
"We randomly pick a small set of tweets containing ZWJ sequences for this purpose.",
"For example, an ideal tokenizer should not split up a Family Emoji as four individual emojis such as Man, Woman, Girl, Boy, as the emoji is meant to be rendered as a single one.",
"Note that some tokenizers discard punctuation during the tokenization process, while others retain them as tokens.",
"For example, Gensim removes all punctuation, including all emojis.",
"Furthermore, the NLTK Tweet Tokenizer does not split up a hashtag as # followed by a word, but rather keeps it intact, as hashtags usually convey meaningful information in tweets.",
"Thus, to generalize the tokenization process across tools, we apply certain post-processing techniques before comparing the list of tokens with Task Tokenization Tools SE ME STE BMP NB ZWJ Gensim 0 0 0 0 0 0 NLTK 70 0 68 70 80 70 NLTK-TT 100 100 0 100 100 0 PyNLPl 90 0 68 60 80 70 SpaCy 100 100 0 100 100 0 SpaCyMoji 100 100 92 100 100 10 Stanza 80 10 70 80 100 40 TextBlob 70 0 68 70 80 70 Table 2: Tokenization accuracy (%) of tools for different test set subsets.",
"the expected list.",
"One such technique is to discard all punctuation from the list of tokens, while for #hashtag occurrences, we treat both hashtag and #hashtag as valid options.",
"Tools.",
"In total, we consider 8 libraries for our experiments.",
"These are the regular English tokenizer of the Natural Language Toolkit (NLTK) by Bird et al. (2009), the NLTK Tweet Tokenizer (i.e., its Twitter-aware tokenizer), the Stanford NLP Group's Stanza (formerly known as StanfordNLP) (Qi et al., 2020), SpaCy and SpaCyMoji, PyNLPl (the Python library for Natural Language Processing, pronounced as pineapple ), Gensim (ehek and Sojka, 2010), TextBlob, and AllenNLP (Gard-ner et al., 2018).",
"3 4.2 Results Table 2 presents the results of tokenizing the given case-specific test data, based on an overall set of 100 input texts.",
"We partitioned this test data with regard to different cases of emoji use for a more fine-grained analysis.",
"For single emoji (SE), intended to be the simplest case, where each input cannot contain more than one emoji, we observe that most tools except for Gensim obtain acceptable results.",
"Since Gensim discards emoji characters, it also fails all other test cases.",
"In contrast, both SpaCy and SpaCyMoji achieve 100% accuracy.",
"Other tools may fail to segment off emojis that have been attached to words without whitespace.",
"The multiple emojis (ME) case considers inputs with more than a single emoji, including clusters of emojis.",
"Some tools, such as NLTK and PyNLPl , 3 We rely on Python 3.8 along with the latest version of all tools (Gensim 3.8.3, NLTK 3.4.5, PyNLPl 1.2.9, SpaCy 2.2.4, SpaCyMoji 2.0.0, Stanza 1.1.1, TextBlob 0.15.3) available until November 2020.",
"failed for this part despite having done well on single emoji utterances.",
"Apart from separating off emojis from words, tools here differ mostly based on whether they split up groups of emojis.",
"For skin tone emojis, there are 50 test cases with skin tones.",
"Note that these can have single or multiple emojis, but it is ensured that they bear at least one skin tone emoji.",
"In some cases, the problems are the same as for regular emojis, e.g., splitting off emojis from words.",
"However, some tools generally split off skin tone modifiers from the emojis they are intended to modify.",
"Stanza only breaks a color tone emoji into the base emojis and tone modifiers when it is concatenated with text.",
"Otherwise it can handle a skin tone emoji without splitting it.",
"SpaCyMoji obtains a near-perfect result but still does not manage to preserve all skintone emojis.",
"The next test is designed to assess Basic Multilingual Plane (BMP) and non-BMP emojis, respectively.",
"For each of these cases, a distinct set of 10 tweets was used to assess the performance.",
"Interestingly, non-BMP emojis appear to be better-supported, presumably because they include the most popular emojis.",
"Finally, we consider emojis with zero width joiners (ZJW), where each tweet contains no more than two emojis with at least one ZWJ emoji.",
"The tools that fail in this case, such as NLTK-TT , instead of preserving a ZJW emoji such as , produce multiple separate tokens, including the Unicode zero-width joiners as individual tokens, e.g., , U+200D, , U+200D, , U+200D, and .",
"In fact, none of the tools could achieve 100% accuracy across all ZWJ emojis.",
"This is because they may fail when a regular emoji and a ZWJ one appear together.",
"For example, one of the inputs contains the emojis , which NLTK treats as a single token, although it successfully handles other ZWJ emojis when they are space-separated.",
"In contrast, NLTK-TT appears to be the best option for dealing with emoji clusters, but when it comes to ZWJ emojis, it separates all emojis and joiners.",
"Part-of-Speech (POS) tagging is the process of assigning each token a label that reflects its word class.",
"This may be with respect to traditional parts of speech, such as noun, verb, adjective, etc., or using a more fine-grained inventory of classes.",
"To understand how different POS taggers handle emojis in a sentence, we evaluate all tools for a subset of inputs covering the majority of emoji scenarios mentioned in Section",
"3. For evaluation, we compiled a set of 23 real tweets, in which emojis are used as different parts-of-speech, namely as nouns, adjectives, verbs, adverbs, or as punctuation.",
"We mapped the original part-of-speech tags to these coarse-grained categories and then checked for correctness with regard to human annotations obtained for our tweets.",
"Only the part-of-speech tags assigned to the emojis were considered, while the tagging of all other non-emoji tokens was deemed irrelevant for the purposes of this experiment.",
"Note also that this test suite is limited to clear-cut cases of emojis used within sentences and we do not claim that every potential use of an emoji has an obvious well-defined part-of-speech tag.",
"Tools.",
"For this task, we evaluated all tools except Gensim and PyNLPl , as they do not directly offer any POS tagging functionality.",
"Since tokenization is a prerequisite for POS tagging, a tool is likely to fail to correctly tag a word or emoji if the emoji is not properly tokenized in the preceding step.",
"However, for a more extensive evaluation, we considered two setups.",
"First, we conducted the POS tagging experiment based on the output of the integrated tokenizer of the respective tool.",
"Thus, if a tool was unable to tokenize Emojis are as three separate tokens Emojis, , and are, we still proceeded with the task treating it as one token for the respective tool's POS tagger.",
"Subsequently, we conducted the POS tagging experiment while considering a unified ground truth tokenization as input for all tools.",
"For example, in the case of Emojis are, the tagger could expect to receive them as separate tokens Emojis, , and are.",
"Table 3 reports the results of our part-of-speech tagging experiments.",
"The final two columns summarize the results with the original tokenizer and the modified tokenizer.",
"None of the tools in our experiment could handle the case of emojis acting as adverbs or as punctuation.",
"For instance, My Credit Score Went 7 Points is one such example where the Upwards Button emoji assumes an adverbial role, which none of the taggers recognize, despite the emoji being space-delimited.",
"Similarly, occurrences of the question mark emoji or double exclamation mark emoji used as punctuation are labeled as nouns by all considered tools.",
"Interestingly, we obtained a 100% success rate for handling verb emojis, except with NLTK .",
"Although the latter is the only tool that passes all test cases for noun emojis, it fails for all other cases.",
"Overall, NLTK-TT and Stanza obtain the highest success rates as reported in the penultimate column of the table.",
"When considering the harmonized ground truth tokenization, as reported in the final column of Table 3, the results for TextBlob are boosted significantly and for Stanza a more modest gain is observed.",
"TextBlob and Stanza for instance may fail when emojis are not separated by whitespace from regular words (e.g., love ) or from another emoji (e.g., ).",
"Rectifying the tokenization in such cases improves the results of both tools.",
"The first example in Table 4 shows the interesting phenomenon of redundancy causing incorrect predictions.",
"In this tweet, both the dog emoji and the cat emoji are expected to be tagged as nouns, but Stanza assumes the former to be an adjective due to the additional presence of the regular word dog.",
"To examine this further, we also considered several modifications of the original tweet.",
"First, we considered the tweet without the additional word dog word after the dog emoji , in which case Stanza can easily identify it as a noun.",
"This is reported in the second row of Table",
"4. We also tried replacing the dog emoji with the word dog to see if Stanza can cope with erroneous word reduplication, and it turned out that Stanza could correctly identify both occurrences as nouns.",
"Finally, we considered replacing the word dog with another emoji.",
"In this case, the tool marked the first as a noun and the second as punctuation.",
"In dependency grammar, the syntactic structure of a sentence is described as a tree capturing relationships between head words and dependent words.",
"Given that emojis can have different grammatical roles within a sentence, we thus assessed to what extent popular dependency parsers are affected by the presence of emojis in the input.",
"6.1 Task Setup We rely on the English Web Treebank (EWT), one of the around 200 treebanks in the Universal Dependency (UD) collection 4 , which seeks to define a consistent annotation of grammar (including parts of speech, morphological features, and syntactic dependencies) across over 100 languages.",
"The English Web Treebank UD corpus provides gold standard Universal Dependency annotations , built over the source material of the English Web Treebank (Bies et al., 2012).",
"We randomly pick a set of sentences from EWT and then replace certain obvious words with matching emojis in both the plain text sentences and their corresponding dependency trees to obtain a ground truth set.",
"Examples of such wordemoji replacements include fire , death , salad , etc.",
"To further examine the robustness of the tools, we also incorporate multi-emojis, skin tone emojis, and ZWJ emojis in the input.",
"For instance, one of the EWT sentences includes Chicken salad salad is great too. for which we embed the emoji as Chicken is great too..",
"The purpose of this approach is to assess how well a dependency parser can handle such forms of emoji use.",
"Again, our test suite is limited to clear-cut instances and we do not make the assumption that any possible emoji use will have an unambiguous well-defined ground truth annotation.",
"Tools.",
"Not all of the previously considered tools provide their own dependency parser.",
"For this evaluation, we thus considered only Stanford's CoreNLP , SpaCy , and Stanza .",
"Table 5 reports both Labeled Attachment Score (LAS) and Unlabeled Attachment Score (UAS) results, where the latter consider just the location of the edge, i.e., just the structure of the tree, while the former as well mandate that the edge labels be identified correctly.",
"The first two columns report the average attachment score based on the entire dependency tree for the given set of inputs.",
"The next two columns (i.e. Single Emoji Sub-tree) consider only the parent and child nodes of emojis in the tree, i.e., an emoji-centered sub-tree (or forest).",
"Finally, the last two columns report LAS and UAS 4 Version 2.7 https://universaldependencies.org/ #download Single Emoji Single Emoji Sub-tree Multi Emoji Tools LAS UAS LAS UAS LAS UAS CoreNLP 0.729 0.766 0.631 0.715 0.625 0.655 SpaCy 0.359 0.459 0.211 0.256 0.311 0.401 Stanza 0.821 0.836 0.758 0.796 0.725 0.733 Table 5: Labeled Attachment Score (LAS) and Unlabeled Attachment Score (UAS) of Dependency Parser based on the English Web Treebanks (EWT) for complex emojis including multi-emojis, skin tone emojis, and ZWJ emojis in the given testcases.",
"The results show a clear degradation of both the tree structure (UAS) and the dependency labels (LAS) when it comes to tackling edges in the graph connecting other tokens to emojis.",
"This becomes more evident with the presence of complex emojis in the tree.",
"In general, Stanford CoreNLP and Stanza appear to be more robust than SpaCy .",
"Although the word emoji is not etymologically related to the word emotion, several studies show how emojis can help to express emotions (Shoeb and de Melo, 2020) and sentiment in textual communication (Novak et al., 2015).",
"Keeping this in mind, we further assessed how well NLP tools fare at the task of predicting the sentiment polarity of a text harboring emojis.",
"Table 6 shows examples of texts with different emojis.",
"While the text alone may be ambiguous with respect to its sentiment polarity, the emoji appears to eliminate much of the ambiguity if it is appended to the end of the text.",
"The goal of this endeavor is to examine if the sentiment polarity is predicted correctly when a high-intensity emoji is incorporated into a neutral sentence.",
"For this task, we leverage a set of custom sentences and tweets from the Sentiment140 dataset (Go et al., 2009).",
"We considered a set of texts with neutral or ambiguous sentiment.",
"The sentiment label was verified by multiple tools before considering them in our experiment.",
"A sentence as well as a tweet were only considered when their sentiment labels were consistent across multiple tools.",
"Although the specific sentiment score may vary from one tool to another, we ensured that the sentiment label remained consistent.",
"Each example was then modified with both positive and negative emojis appended to the end, giving us the opportunity to observe whether the predicted polarity of the original sentence changes in accordance with the polarity of the emojis.",
"For example, I'll explain it later is a neutral sentence that is modified either with a positive emoji or with a negative one such as .",
"We use different sets of positive and negative emojis to modify the sentiment of the text, covering a broad spectrum of the sentiment polarity of emojis.",
"The sentiment of emojis was determined based on the data by Novak et al. (2015).",
"Tools.",
"Although many tools could be trained on a labeled set of tweets, we sought to assess preexisting systems as they are often used out-of-the-box without additional training or fine-tuning.",
"Hence, this study considers NLTK and TextBlob , as they can readily be used on the fly without requiring new labeled data.",
"Note that TextBlob 's sentiment module contains two sentiment analyzers, PatternAnalyzer and NaiveBayesAnalyzer, the latter trained on movie reviews.",
"For NLTK , we use VADER (Valence Aware Dictionary and sEntiment Reasoner), a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiment as expressed in social media.",
"Additionally, we evaluate the standalone VADER library directly as it is meant to support emoji sentiment (Hutto and Gilbert, 2014).",
"The results are given in Table",
"7. In the sentiment prediction task for a given tweet with emojis, neither NLTK nor the TextBlob models appear to be able to consider the emojis as part of their sentiment polarity prediction.",
"Only the stand-alone VADER library is able to discern any difference when positive or negative emojis are provided with the sentence, as reflected in the final row of Table",
"7. The discrepancies between NLTK's VADER component and the stand-alone VADER stem from differences in the lexicon used by the tools.",
"The stand-alone VADER includes a dedicated emoji lexicon that is omitted in the NLTK version.",
"Some studies (Jain et al., 2019) show that an emoji can moderate the sentiment of a given tweet if the sentiment of an emoji is considered during training.",
"Clearly, systems trained on emoji-bearing data can learn to consider them during prediction if their tokenization is handled properly and they are not discarded during preprocessing.",
"However, given the importance of emojis in conveying sentiment, it appears that most out-of-the-box tools ought to consider emojis as well.",
"Overall, based on Table 9, we can see that none of the considered tools perfectly handles all evaluated tasks with emojis.",
"Indeed, many text preprocessing pipelines, especially deep learning ones with a limited vocabulary, routinely discard emojis along with punctuation characters as non-standard characters.",
"Gensim by default follows this common approach, which is likely suboptimal for emojis.",
"NLTK-TT as well as Stanza help keep track of hashtags as they retain them with the # sign intact, whereas other tools split them up as two individual tokens or remove the #.",
"NLTK , Stanza , and TextBlob fail to tokenize emojis if emojis are tied up with other words, while SpaCy , SpaCyMoji , and NLTK-TT handle such cases.",
"Note that accurate tokenization, e.g., splitting off emojis attached to words, can also be a prerequisite for many downstream tasks, such as enabling higher-quality text classification and information retrieval.",
"For POS tagging, somewhat surprisingly, almost all tools did well with verbs, while they all struggled with punctuation emojis as well as adverbs.",
"The results for adjectives were as well quite mixed.",
"Overall, NLTK-TT and TextBlob achieved the highest success rate for POS tagging, although both still struggle with adverbs and punctuation, which can also lead to adverse effects in downstream tasks such as syntactic parsing.",
"Moreover, TextBlob requires the use of a modified tokenizer.",
"For dependency parsing, we found Stanford CoreNLP and Stanza to be the most robust in correctly assessing emojis.",
"SpaCy, in contrast, does not appear to generalize well enough to lexical items such as emojis that may be lacking in the training data.",
"In general, there is a need for dependency parsers to be trained on more diverse data.",
"Thus, in practice one may wish to consider a mix-and-match approach, using a tokenizer from one library, a tagger from another, and a dependency parser from yet another library.",
"In our POS tagging and dependency parsing evaluations, we sought to study clear-cut cases to observe whether tools have basic support for emojis.",
"Further discussion is necessary on recommended annotation schemes for more diverse forms of emoji Tools SE GE STE BMP ZWJ Gensim (cid:55) (cid:55) (cid:55) (cid:55) (cid:55) NLTKPyNLPl (cid:51) (cid:55) (cid:51) (cid:51) (cid:51) StanzaTextBlobAllenNLPNLTK-TT (cid:51) (cid:51) (cid:55) (cid:51) (cid:55) SpaCy SpaCyMoji (cid:51) (cid:51) (cid:51) (cid:51) (cid:55) Table 9: An overview of popular text processing NLP tools and their emoji support.",
"use for which the ground truth may not be as obvious.",
"Some researchers argue that the default tagging of emoji should be as adverbials, interjections, or punctuation (Grosz et al., 2021).",
"Similarly, emojis are syntactically comparable to free adjuncts, which constrains the set of valid parse trees.",
"Hence, further work is necessary to devise broader-coverage benchmarks for the tasks considered in our study.",
"Semantic Associations.",
"Finally, we also inspected semantic associations for particular kinds of emojis.",
"We considered a 300-dimensional word2vec SGNS model trained on the EmoTag (Shoeb et al., 2019) dataset, and generated a set of nearest neighbours for selected target emojis.",
"Table 8 reports the nearest emoji neighbours for different skin tone variants of the Clapping Hand emoji.",
"Most of the top 5 neighbours for each emoji bear the same skin tone color except one each for Medium Light and Medium tone emojis reported in Rows 4 and 5, respectively.",
"We conjecture that speakers who use skin tone modifiers frequently also use additional emojis that support such modification and that they naturally tend to use the respective modifier fairly consistently.",
"The last row of the same table shows the nearest neighbours for a ZWJ family emoji.",
"All of the nearest neighbours of this ZWJ emoji contain a ZWJ sequence as well, suggesting that they occur in similar contexts.",
"Emojis have become an integral part of modern interpersonal communication and text encountered in chat messages, social media, or emails is often laden with emojis.",
"Hence, it is important to endow NLP tools with emoji support not only to obtain a deeper understanding of this wealth of data but also to properly preserve and process them correctly.",
"In this study, we assessed how well prominent NLP tools cope with text containing emoji characters.",
"To this end, we evaluated a set of tools on three different tasks across a range of challenging test sets capturing particular phenomena and encodings.",
"Our study demonstrates that there are notable shortcomings in widely used NLP tools.",
"Although many tools are partially capable of operating on emojis, none of them proved fully equipped to tackle the full set of aspects considered in our study.",
"Hence, special care needs to be taken when developing applications that may encounter emojis."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain"
] |
[
"Although transfer learning has been shown to be successful for tasks like object and speech recognition, its applicability to question answering (QA) has yet to be well-studied.",
"In this paper, we conduct extensive experiments to investigate the transferability of knowledge learned from a source QA dataset to a target dataset using two QA models.",
"The performance of both models on a TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson et al., 2013) is significantly improved via a simple transfer learning technique from MovieQA (Tapaswi et al., 2016).",
"In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%.",
"Finally, we show that transfer learning is helpful even in unsupervised scenarios when correct answers for target QA dataset examples are not available.",
"One of the most important characteristics of an intelligent system is to understand stories like humans do.",
"A story is a sequence of sentences, and can be in the form of plain text (Trischler et al., 2017; Rajpurkar et al., 2016; Weston et al., 2016; Yang et al., 2015) or spoken content (Tseng et al., 2016), where the latter usually requires the spoken content to be first transcribed into text by automatic speech recognition (ASR), and the model will subsequently process the ASR output.",
"To evaluate the extent of the model's understanding of the story, it is asked to answer questions about the story.",
"Such a task is referred to as question answering (QA), and has been a long-standing yet challenging problem in natural language processing (NLP).",
"Several QA scenarios and datasets have been introduced over the past few years.",
"These scenarios differ from each other in various ways, including the length of the story, the format of the answer, and the size of the training set.",
"In this work, we focus on context-aware multi-choice QA, where the answer to each question can be obtained by referring to its accompanying story, and each question comes with a set of answer choices with only one correct answer.",
"The answer choices are in the form of open, natural language sentences.",
"To correctly answer the question, the model is required to understand and reason about the relationship between the sentences in the story.",
"Transfer learning (Pan and Yang, 2010) is a vital machine learning technique that aims to use the knowledge learned from one task and apply it to a different, but related, task in order to either reduce the necessary fine-tuning data size or improve performance.",
"Transfer learning, also known as domain adaptation 1 , has achieved success in numerous domains such as computer vision (Sharif Razavian et al., 2014), ASR (Doulaty et al., 2015; Huang et al., 2013), and NLP (Zhang et al., 2017; Mou et al., 2016).",
"In computer vision, deep neural networks trained on a large-scale image classification dataset such as ImageNet (Rus-sakovsky et al., 2015) have proven to be excellent feature extractors for a broad range of visual tasks such as image captioning (Lu et al., 2017; Karpathy and Fei-Fei, 2015; Fang et al., 2015) and visual 1 In this paper, we do not distinguish conceptually between transfer learning and domain adaptation.",
"A domain' in the sense we use throughout this paper is defined by datasets.",
"question answering (Xu and Saenko, 2016; Fukui et al., 2016; Yang et al., 2016; Antol et al., 2015), among others.",
"In NLP, transfer learning has also been successfully applied to tasks like sequence tagging (Yang et al., 2017), syntactic parsing (Mc-Closky et al., 2010) and named entity recognition (Chiticariu et al., 2010), among others.",
"Although transfer learning has been successfully applied to various applications, its applicability to QA has yet to be well-studied.",
"In this paper, we tackle the TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richard-son et al., 2013) with transfer learning from MovieQA (Tapaswi et al., 2016) using two existing QA models.",
"Both models are pre-trained on MovieQA and then fine-tuned on each target dataset, so that their performance on the two target datasets are significantly improved.",
"In particular, one of the models achieves the state-of-the-art on all target datasets; for the TOEFL listening comprehension test, it outperforms the previous best model by 7%.",
"Transfer learning without any labeled data from the target domain is referred to as unsupervised transfer learning.",
"Motivated by the success of unsupervised transfer learning for speaker adaptation (Chen et al., 2011; Wallace et al., 2009) and spoken document summarization (Lee et al., 2013), we further investigate whether unsupervised transfer learning is feasible for QA.",
"Although not well studied in general, transfer Learning for QA has been explored recently.",
"To the best of our knowledge, Kadlec et al. (2016) is the first work that attempted to apply transfer learning for machine comprehension.",
"The authors showed only limited transfer between two QA tasks, but the transferred system was still significantly better than a random baseline.",
"Wiese et al. (2017) tackled a more specific task of biomedical QA with transfer learning from a large-scale dataset.",
"The work most similar to ours is by Min et al. (2017), where the authors used a simple transfer learning technique and achieved significantly better performance.",
"However, none of these works study unsupervised transfer learning, which is especially crucial when the target dataset is small.",
"Golub et al. (2017) proposed a two-stage synthesis network that can generate synthetic questions and answers to augment insuffi-cient training data without annotations.",
"Among several existing QA settings, in this work we focus on multi-choice QA (MCQA).",
"We are particularly interested in understanding whether a QA model can perform better on one MCQA dataset with knowledge transferred from another MCQA dataset.",
"In Section 2.1, we first formalize the task of MCQA.",
"We then describe the procedures for transfer learning from one dataset to another in Section 2.2.",
"We consider two kinds of settings for transfer learning in this paper, one is supervised and the other is unsupervised.",
"In MCQA, the inputs to the model are a story, a question, and several answer choices.",
"The story, denoted by S , is a list of sentences, where each of the sentences is a sequence of words from a vocabulary set V .",
"The question and each of the answer choices, denoted by Q and C , are both single sentences also composed of words from V .",
"The QA model aims to choose one correct answer from multiple answer choices based on the information provided in S and Q .",
"The procedure of transfer learning in this work is straightforward and includes two steps.",
"The first step is to pre-train the model on one MCQA dataset referred to as the source task, which usually contains abundant training data.",
"The second step is to fine-tune the same model on the other MCQA dataset, which is referred to as the target task, that we actually care about, but that usually contains much less training data.",
"The effectiveness of transfer learning is evaluated by the model's performance on the target task.",
"In supervised transfer learning, both the source and target datasets provide the correct answer to each question during pre-training and fine-tuning, and the QA model is guided by the correct answer to optimize its objective function in a supervised manner in both stages.",
"We also consider unsupervised transfer learning where the correct answer to each question in the target dataset is not available.",
"In other words, the entire process is supervised during pre-training, but unsupervised during fine-tuning.",
"A self-labeling technique inspired by Lee et al. (2013); Chen et al. (2011); Wallace et al. (2009) is used during fine-tuning on the target dataset.",
"We present the proposed algorithm for unsupervised transfer learning in Algorithm 1.",
"Algorithm 1 Unsupervised QA Transfer Learning Input: Source dataset with correct answer to each question; Target dataset without any answer; Number of training epochs.",
"Output: Optimal QA model M 1: Pre-train QA model M on the source dataset.",
"2: repeat 3: For each question in the target dataset, use M to predict its answer.",
"4: For each question, assign the predicted answer to the question as the correct one.",
"5: Fine-tune M on the target dataset as usual.",
"6: until Reach the number of training epochs.",
"We used MovieQA (Tapaswi et al., 2016) as the source MCQA dataset, and TOEFL listening comprehension test (Tseng et al., 2016) and MCTest (Richardson et al., 2013) as two separate target datasets.",
"Examples of the three datasets are shown in Table 1.",
"MovieQA is a dataset that aims to evaluate automatic story comprehension from both video and text.",
"The dataset provides multiple sources of information such as plot synopses, scripts, subtitles, and video clips that can be used to infer answers.",
"We only used the plot synopses of the dataset, so our setting is the same as pure textual MCQA.",
"The dataset contains 9,848/1,958 train/dev examples; each question comes with a set of five possible answer choices with only one correct answer.",
"TOEFL listening comprehension test is a recently published, very challenging MCQA dataset that contains 717/124/122 train/dev/test examples.",
"It aims to test knowledge and skills of academic English for global English learners whose native languages are not English.",
"There are only four answer choices for each question.",
"The stories in this dataset are in audio form.",
"Each story comes with two transcripts: manual and ASR transcriptions, where the latter is obtained by running the CMU Sphinx recognizer (Walker et al., 2004) on the original audio files.",
"We use TOEFL-manual and TOEFL-ASR to denote the two versions, respectively.",
"We highlight that the questions in this dataset are not easy because most of the answers cannot be found by simply matching the question and the choices without understanding the story.",
"For example, there are questions regarding the gist of the story or the conclusion for the conversation.",
"MCTest is a collection of 660 elementary-level children's stories.",
"Each question comes with a set of four answer choices.",
"There are two variants in this dataset: MC160 and MC500.",
"The former contains 280/120/240 train/dev/test examples, while the latter contains 1,200/200/600 train/dev/test examples and is considered more difficult.",
"The two chosen target datasets are challenging because the stories and questions are complicated, and only small training sets are available.",
"Therefore, it is difficult to train statistical models on only their training sets because the small size limits the number of parameters in the models, and prevents learning any complex language concepts simultaneously with the capacity to answer questions.",
"We demonstrate that we can effectively overcome these difficulties via transfer learning in Section 5.",
"Among numerous models proposed for multiple-choice QA (Trischler et al., 2016; Fang et al., 2016; Tseng et al., 2016), we adopt the End-to-End Memory Network (MemN2N) 2 (Sukhbaatar et al., 2015) and Query-Based Attention CNN (QACNN) 3 (Liu et al., 2017), both open-sourced, to conduct the experiments.",
"Below we briefly introduce the two models in Section 4.1 and Section 4.2, respectively.",
"For the details of the models, please refer to the original papers.",
"An End-to-End Memory Network (MemN2N) first transforms Q into a vector representation with",
"an embedding layer B .",
"At the same time, all sentences in S are also transformed into two different sentence representations with two additional embedding layers A and C .",
"The first sentence representation is used in conjunction with the question representation to produce an attention-like mechanism that outputs the similarity between each sentence in S and Q .",
"The similarity is then used to weight the second sentence representation.",
"We then obtain the sum of the question representation and the weighted sentence representations over S as Q 0 .",
"In the original MemN2N, Q 0 is decoded to provide the estimation of the probability of being an answer for each word within a fixed set.",
"The word with the highest probability is then selected as the answer.",
"However, in multiple-choice QA, C is in the form of open, natural language sentences instead of a single word.",
"Hence we modify MemN2N by adding an embedding layer F to encode C as a vector representation C 0 by averaging the embeddings of words in C .",
"We then compute the similarity between each choice representation C 0 and Q 0 .",
"The choice C with the highest probability is then selected as the answer.",
"A Query-Based Attention CNN (QACNN) first uses an embedding layer E to transform S , Q , and C into a word embedding.",
"Then a compare layer generates a story-question similarity map SQ and a story-choice similarity map SC .",
"The two similarity maps are then passed into a two-stage CNN architecture, where a question-based attention mechanism on the basis of SQ is applied to each of the two stages.",
"The first stage CNN generates a word-level attention map for each sentence in S , which is then fed into the second stage CNN to generate a sentence-level attention map, and yield choice-answer features for each of the choices.",
"Finally, a classifier that consists of two fully-connected layers collects the information from every choice answer feature and outputs the most likely answer.",
"The trainable parameters are the embedding layer E that transforms S , Q , and C into word embeddings, the two-stage CNN W (1) CNN and W (2) CNN that integrate information from the word to the sentence level, and from the sentence to the story level, and the two fully-connected layers W (1) FC and W (2) FC that make the final prediction.",
"We mention the trainable parameters here because in Section 5 we will conduct experiments to analyze the transferability of the QACNN by fine-tuning some parameters while keeping others fixed.",
"Since QACNN is a newly proposed QA model has a relatively complex structure, we illustrate its architecture in Figure 1, which is enough for understanding the rest of the paper.",
"Please refer to the original paper (Liu et al., 2017) for more details.",
"For pre-training MemN2N and QACNN on MovieQA, we followed the exact same procedure as in Tapaswi et al. (2016) and Liu et al. (2017), respectively.",
"Each model was trained on the training set of the MovieQA task and tuned on the dev set, and the best performing models on the dev set were later fine-tuned on the target dataset.",
"During fine-tuning, the model was also trained on the training set of target datasets and tuned on the dev set, and the performance on the testing set of the target datasets was reported as the final result.",
"We use accuracy as the performance measurement.",
"Table 2 reports the results of our transfer learning on TOEFL-manual, TOEFL-ASR, MC160, and MC500, as well as the performance of the previous best models and several ablations that did not use pre-training or fine-tuning.",
"From Table 2, we have the following observations.",
"Transfer learning helps.",
"Rows",
"(a) and",
"(g) show the respective results when the QACNN and MemN2N are trained directly on the target datasets without pre-training on MovieQA.",
"Rows",
"(b) and",
"(h) show results when the models are trained only on the MovieQA data.",
"Rows",
"(c) and",
"(i) show results when the models are trained on both MovieQA and each of the four target datasets, and tested on the respective target dataset.",
"We observe that the results achieved in",
"(a),",
"(b),",
"(c),",
"(g),",
"(h), and",
"(i) are worse than their fine-tuned counterparts",
"(d),",
"(e),",
"(f), and",
"(j).",
"Through transfer learning, both QACNN and MemN2N perform better on all the target datasets.",
"For example, QACNN only achieves 57.5% accuracy on MC160 without pre-training on MovieQA, but the accuracy increases by 18.9% with pretraining (rows",
"(d) vs.",
"(a)).",
"In addition, with transfer learning, QACNN outperforms the previous best models on TOEFL-manual by 7%, TOEFL-ASR (Fang et al., 2016) by 6.5%, MC160 (Wang et al., 2015) by 1.1%, and MC500 (Trischler et al., 2016) by 1.3%, and becomes the state-of-the-art on all target datasets.",
"Which QACNN parameters to transfer?",
"For the QACNN, the training parameters are E, W (1) CNN , W (2) CNN , W (1) FC , and W (2) FC (Sec-tion 4.2).",
"To better understand how transfer learning affects the performance of QACNN, we 1589 also report the results of keeping some parameters fixed and only fine-tuning other parameters.",
"We choose to fine-tune either only the last fully-connected layer W (2) FC while keeping other parameters fixed (row",
"(d) in Table 2), the last two fully-connected layers W (1) FC and W (2) FC (row",
"(e)), and the entire QACNN (row",
"(f)).",
"For TOEFL-manual, TOEFL-ASR, and MC500, QACNN performs the best when only the last two fully-connected layers were fine-tuned; for MC160, it performs the best when only the last fully-connected layer was fine-tuned.",
"Note that for training the QACNN, we followed the same procedure as in Liu et al. (2017), whereby pre-trained GloVe word vectors (Pennington et al., 2014) were used to initialize the embedding layer, which were not updated during training.",
"Thus, the embedding layer does not depend on the training set, and the effective vocabularies are the same.",
"It is interesting to see that fine-tuning the entire QACNN doesn't necessarily produce the best result.",
"For MC500, the accuracy of QACNN drops by 4.6% compared to just fine-tuning the last two fully-connected layers (rows",
"(f) vs.",
"(e)).",
"We conjecture that this is due to the amount of training data of the target datasets when the training set of the target dataset is too small, fine-tuning all the parameters of a complex model like QACNN may result in overfitting.",
"This discovery aligns with other domains where transfer learning is well-studied such as object recognition (Yosin-ski et al., 2014).",
"examples is better than a small training set.",
"We expected to see that a MemN2N, when trained directly on the target dataset without pre-training on MovieQA, would outperform a MemN2N pre-trained on MovieQA without fine-tuning on the target dataset (rows",
"(g) vs.",
"(h)), since the model is evaluated on the target dataset.",
"However, for the QACNN this is surprisingly not the case QACNN pre-trained on MovieQA without fine-tuning on the target dataset outperforms QACNN trained directly on the target dataset without pre-training on MovieQA (rows",
"(b) vs.",
"(a)).",
"We attribute this to the limited size of the target dataset and the complex structure of the QACNN.",
"We conducted experiments to study the relationship between the amount of training data from the target dataset for fine-tuning the model and the performance.",
"We first pre-train the models on MovieQA, then vary the training data size of the target dataset used to fine-tune them.",
"Note that for QACNN, we only fine-tune the last two fully-connected layers instead of the entire model, since doing so usually produces the best performance according to Table 2.",
"The results are shown in Table 3 4 .",
"As expected, the more training data is used for fine-tuning, the better the model's performance is.",
"We also observe that the extent of improvement from using 0% to 25% of target training data is consistently larger than using from 25% to 50%, 50% to 75%, and 75% to 100%.",
"Using the QACNN fine-tuned on TOEFL-manual as an example, the accuracy of the QACNN improves by 2.7% when varying the training size from 0% to 25%, but only improves by 0.9%, 0.5%, and 0.7% when varying the training size from 25% to 50%, 50% to 75%, and 75% to 100%, respectively.",
"We also vary the size of MovieQA for pre-training to study how large the source dataset should be to make transfer learning feasible.",
"The results are shown in Table 4.",
"We find that even a small amount of source data can help.",
"For example, by using only 25% of MovieQA for pre-training, the accuracy increases 6.3% on MC160.",
"This is because 25% of MovieQA training set (2,462 examples) is still much larger than the MC160 training set (280 examples).",
"As the size of the source dataset increases, the performance of QACNN continues to improve.",
"4 We only include the results of QACNN in Table 3, but the results of MemN2N are very similar to QACNN.",
"We are interested in understanding what types of questions benefit the most from transfer learning.",
"According to the official guide to the TOEFL test, the questions in TOEFL can be divided into 3 types.",
"Type 1 questions are for basic comprehension of the story.",
"Type 2 questions go beyond basic comprehension, but test the understanding of the functions of utterances or the attitude the speaker expresses.",
"Type 3 questions further require the ability of making connections between different parts of the story, making inferences, drawing conclusions, or forming generalizations.",
"We used the split provided by Fang et al. (2016), which contains 70/18/34 Type 1/2/3 questions.",
"We compare the performance of the QACNN and MemN2N on different types of questions in TOEFL-manual with and without pre-training on MovieQA, and show the results in Figure 2.",
"From Figure 2 we can observe that for both the QACNN and MemN2N, their performance on all three types of questions improves after pre-training, showing that the effectiveness of transfer learning is not limited to specific types of questions.",
"So far, we have studied the property of supervised transfer learning for QA, which means 1591 Figure 4: Visualization of the changes of the word-level attention map in the first stage CNN of QACNN in different training epochs.",
"that during pre-training and fine-tuning, both the source and target datasets provide the correct answer for each question.",
"We now conduct unsupervised transfer learning experiments described in Section 2.2 (Algorithm 1), where the answers to the questions in the target dataset are not available.",
"We used QACNN as the QA model and all the parameters ( E, W (1) CNN , W (2) CNN , W (1) FC , and W (2) FC ) were updated during fine-tuning in this experiment.",
"Since the range of the testing accuracy of the TOEFL-series (TOEFL-manual and TOEFL-ASR) is different from that of MCTest (MC160 and MC500), their results are displayed separately in Figure",
"3(a) and Figure",
"3(b), respectively.",
"From Figure",
"3(a) and Figure",
"3(b) we can observe that without ground truth in the target dataset for supervised fine-tuning, transfer learning from a source dataset can still improve the performance through a simple iterative self-labeling mechanism.",
"For TOEFL-manual and TOEFL-ASR, QACNN achieves the highest testing accuracy at Epoch 7 and 8, outperforming its counterpart without fine-tuning by approximately 4% and 5%, respectively.",
"For MC160 and MC500, the QACNN achieves the peak at Epoch 3 and 6, outperforming its counterpart without fine-tuning by about 2% and 6%, respectively.",
"The results also show that the performance of unsupervised transfer learning is still worse than supervised transfer learning, which is not surprising, but the effectiveness of unsupervised transfer learning when no ground truth labels are provided is validated.",
"To better understand the unsupervised transfer learning process of QACNN, we visualize the changes of the word-level attention map during training Epoch 1, 4, 7, and 10 in Figure 4.",
"We use the same question from TOEFL-manual as shown in Table 1 as an example.",
"From Figure 4 we can observe that as the training epochs increase, the QACNN focuses more on the context in the story that is related to the question and the correct answer choice.",
"For example, the correct answer is related to class project.",
"In Epoch 1 and 4, the model does not focus on the phrase class repre-sentation, but the model attends on the phrase in Epoch 7 and 10.",
"This demonstrates that even without ground truth, the iterative process in Algorithm 1 is still able to lead the QA model to gradually focus more on the important part of the story for answering the question.",
"In this paper we demonstrate that a simple transfer learning technique can be very useful for the task of multi-choice question answering.",
"We use a QACNN and a MemN2N as QA models, with MovieQA as the source task and a TOEFL listening comprehension test and MCTest as the target tasks.",
"By pre-training on MovieQA, the performance of both models on the target datasets improves significantly.",
"The models also require much less training data from the target dataset to achieve similar performance to those without pretraining.",
"We also conduct experiments to study the influence of transfer learning on different types 1592 of questions, and show that the effectiveness of transfer learning is not limited to specific types of questions.",
"Finally, we show that by a simple iterative self-labeling technique, transfer learning is still useful, even when the correct answers for target QA dataset examples are not available, through quantitative results and visual analysis.",
"One area of future research will be generalizing the transfer learning results presented in this paper to other QA models and datasets.",
"In addition, since the original data format of the TOEFL listening comprehension test is audio instead of text, it is worth trying to initialize the embedding layer of the QACNN with semantic or acoustic word embeddings learned directly from speech (Chung and Glass, 2018, 2017; Chung et al., 2016) instead of those learned from text (Mikolov et al., 2013; Pennington et al., 2014)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"other"
] |
[
"In this work, we provide a systematic and comprehensive empirical comparison of pretrained multilingual language models versus their monolingual counterparts with regard to their monolingual task performance.",
"We study a set of nine typologically diverse languages with readily available pretrained monolingual models on a set of five diverse monolingual downstream tasks.",
"We first aim to establish, via fair and controlled comparisons, if a gap between the multilingual and the corresponding monolingual representation model of that language exists, and subsequently investigate the reason for any performance difference.",
"To disentangle conflating factors, we train new monolingual models on the same data, with monolingually and multilingually trained tokenizers.",
"We find that while the pretraining data size is an important factor, a designated monolingual tokenizer plays an equally important role in the downstream performance.",
"Our results show that languages that are adequately represented in the multilingual model's vocabulary exhibit negligible performance decreases over their monolingual counterparts.",
"We further find that replacing the original multilingual tokenizer with the specialized monolingual tokenizer improves the downstream performance of the multilingual model for almost every task and language.",
"Following large transformer-based language models (LMs, Vaswani et al., 2017) pretrained on large English corpora (e.g., BERT, RoBERTa, T5; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020), similar monolingual language models have been introduced for other languages (Virtanen et al., 2019;",
"Our code is available at https://github.com/Adapter-Hub/hgiyt.",
"Antoun et al., 2020; Martin et al., 2020, inter alia ), offering previously unmatched performance in all NLP tasks.",
"Concurrently, massively multilingual models with the same architectures and training procedures, covering more than 100 languages, have been proposed (e.g., mBERT, XLM-R, mT5; Devlin et al., 2019; Conneau et al., 2020; Xue et al., 2021).",
"The industry of pretraining and releasing new monolingual BERT models continues its operations despite the fact that the corresponding languages are already covered by multilingual models.",
"The common argument justifying the need for monolingual variants is the assumption that multilingual modelsdue to suffering from the so-called curse of multilinguality (Conneau et al., 2020, i.e., the lack of capacity to represent all languages in an equitable way)underperform monolingual models when applied to monolingual tasks (Virtanen et al., 2019; Antoun et al., 2020; Ronnqvist et al., 2019, inter alia ).",
"However, little to no compelling empirical evidence with rigorous experiments and fair comparisons have been presented so far to support or invalidate this strong claim.",
"In this regard, much of the work proposing and releasing new monolingual models is grounded in anecdotal evidence, pointing to the positive results reported for other monolingual BERT models (de Vries et al., 2019; Virtanen et al., 2019; Antoun et al., 2020).",
"Monolingual BERT models are typically evaluated on downstream NLP tasks to demonstrate their effectiveness in comparison to previous monolingual models or mBERT (Virtanen et al., 2019; Antoun et al., 2020; Martin et al., 2020, inter alia ).",
"While these results do show that certain monolingual models can outperform mBERT in certain tasks, we hypothesize that this may substantially vary across different languages and language properties, tasks, pretrained models and their pretraining data, domain, and size.",
"We further argue that conclusive evidence, either supporting or refuting the key hypothesis that monolingual models currently outperform multilingual models, necessitates an independent and controlled empirical comparison on a diverse set of languages and tasks.",
"While recent work has argued and validated that mBERT is under-trained (Ronnqvist et al., 2019; Wu and Dredze, 2020), providing evidence of improved performance when training monolingual models on more data, it is unclear if this is the only factor relevant for the performance of monolingual models.",
"Another so far under-studied factor is the limited vocabulary size of multilingual models compared to the sum of tokens of all corresponding monolingual models.",
"Our analyses investigating dedicated (i.e., language-specific) tokenizers reveal the importance of high-quality tokenizers for the performance of both model variants.",
"We also shed light on the interplay of tokenization with other factors such as pretraining data size.",
"Contributions.",
"1) We systematically compare monolingual with multilingual pretrained language models for 9 typologically diverse languages on 5 structurally different tasks.",
"2) We train new monolingual models on equally sized datasets with different tokenizers (i.e., shared multilingual versus dedicated language-specific tokenizers) to disentangle the impact of pretraining data size from the vocabulary of the tokenizer.",
"3) We isolate factors that contribute to a performance difference (e.g., tokenizers' fertility, the number of unseen (sub)words, data size) and provide an in-depth analysis of the impact of these factors on task performance.",
"4) Our results suggest that monolingually adapted tokenizers can robustly improve monolingual performance of multilingual models.",
"Multilingual LMs.",
"The widespread usage of pretrained multilingual Transformer-based LMs has been instigated by the release of multilingual BERT (Devlin et al., 2019), which followed on the success of the monolingual English BERT model.",
"mBERT adopted the same pretraining regime as monolingual BERT by concatenating the 104 largest Wikipedias.",
"Exponential smoothing was used when creating the subword vocabulary based on Word-Pieces (Wu et al., 2016) and a pretraining corpus.",
"By oversampling underrepresented languages and undersampling overrepresented ones, it aims to counteract the imbalance of pretraining data sizes.",
"The final shared mBERT vocabulary comprises a total of 119,547 subword tokens.",
"Other multilingual models followed mBERT, such as XLM-R (Conneau et al., 2020).",
"Concurrently, many studies analyzed mBERT's and XLM-R's capabilities and limitations, finding that the multilingual models work surprisingly well for cross-lingual tasks, despite the fact that they do not rely on direct cross-lingual supervision (e.g., parallel or comparable data, translation dictionaries; Pires et al., 2019; Wu and Dredze, 2019; Artetxe et al., 2020; Hu et al., 2020; K et al., 2020).",
"However, recent work has also pointed to some fundamental limitations of multilingual LMs.",
"Conneau et al. (2020) observe that, for a fixed model capacity, adding new languages increases crosslingual performance up to a certain point, after which adding more languages results in performance drops.",
"This phenomenon, termed the curse of multilinguality , can be attenuated by increasing the model capacity (Artetxe et al., 2020; Pfeiffer et al., 2020b; Chau et al., 2020) or through additional training for particular language pairs (Pfeiffer et al., 2020b; Ponti et al., 2020).",
"Another observation concerns substantially reduced crosslingual and monolingual abilities of the models for resource-poor languages with smaller pretraining data (Wu and Dredze, 2020; Hu et al., 2020; Lauscher et al., 2020).",
"Those languages remain underrepresented in the subword vocabulary and the model's shared representation space despite oversampling.",
"Despite recent efforts to mitigate this issue (e.g., Chung et al. (2020) propose to cluster and merge the vocabularies of similar languages, before defining a joint vocabulary across all lan-guages), the multilingual LMs still struggle with balancing their parameters across many languages.",
"Monolingual versus Multilingual LMs.",
"New monolingual language-specific models also emerged for many languages, following BERT's architecture and pretraining procedure.",
"There are monolingual BERT variants for Arabic (Antoun et al., 2020), French (Martin et al., 2020), Finnish (Virtanen et al., 2019), Dutch (de Vries et al., 2019), to name only a few.",
"Pyysalo et al. (2020) released 44 monolingual WikiBERT models trained on Wikipedia.",
"However, only a few studies have thus far, either explicitly or implicitly, attempted to understand how monolingual and multilingual LMs compare across languages.",
"Nozza et al. (2020) extracted task results from the respective papers on monolingual BERTs to facilitate an overview of monolingual models and their comparison to mBERT.",
"1 However, they have not verified the scores, nor have they performed a controlled impartial comparison.",
"Vulic et al. (2020) probed mBERT and monolingual BERT models across six typologically diverse languages for lexical semantics.",
"They show that pretrained monolingual BERT models encode significantly more lexical information than mBERT.",
"Zhang et al. (2020) investigated the role of pretraining data size with RoBERTa, finding that the model learns most syntactic and semantic features on corpora spanning 10M100M word tokens, but still requires massive datasets to learn higher-level semantic and commonsense knowledge.",
"Mulcaire et al. (2019) compared monolingual and bilingual ELMo (Peters et al., 2018) LMs across three downstream tasks, finding that contextualized representations from the bilingual models can improve monolingual task performance relative to their monolingual counterparts.",
"2 However, it is unclear how their findings extend to massively multilingual LMs potentially suffering from the curse of multilinguality.",
"Ronnqvist et al. (2019) compared mBERT to monolingual BERT models for six languages (German, English, Swedish, Danish, Norwegian, Finnish) on three different tasks.",
"They find that mBERT lags behind its monolingual counterparts in terms of performance on cloze and generation tasks.",
"They also identified clear differences among the six languages in terms of this performance gap.",
"They speculate that mBERT is under-trained with respect to individual languages.",
"However, their set of tasks is limited, and their language sample is typologically narrow; it remains unclear whether these findings extend to different language families and to structurally different tasks.",
"Despite recent efforts, a careful, systematic study within a controlled experimental setup, a diverse language sample and set of tasks is still lacking.",
"We aim to address this gap in this work.",
"We compare multilingual BERT with its monolingual counterparts in a spectrum of typologically 1",
"https://bertlang.unibocconi.it/ 2 Mulcaire et al. (2019) clearly differentiate between multilingual and polyglot models.",
"Their definition of polyglot models is in line with what we term multilingual models.",
"diverse languages and across a variety of downstream tasks.",
"By isolating and analyzing crucial factors contributing to downstream performance, such as tokenizers and pretraining data, we can conduct unbiased and fair comparisons.",
"Our selection of languages has been guided by several (sometimes competing) criteria: C1) typological diversity; C2) availability of pretrained monolingual BERT models; C3) representation of the languages in standard evaluation benchmarks for a sufficient number of tasks.",
"Regarding C1, most high-resource languages belong to the same language families, thus sharing a majority of their linguistic features.",
"Neglecting typological diversity inevitably leads to poor generalizability and language-specific biases (Gerz et al., 2018; Ponti et al., 2019; Joshi et al., 2020).",
"Following recent work in multilingual NLP that pays particular attention to typological diversity (Clark et al., 2020; Hu et al., 2020; Ponti et al., 2020, inter alia ), we experiment with a language sample covering a broad spectrum of language properties.",
"Regarding C2, for computational tractability, we only select languages with readily available BERT models.",
"Unlike prior work, which typically lacks either language (Ronnqvist et al., 2019; Zhang et al., 2020) or task diversity (Wu and Dredze, 2020; Vulic et al., 2020), we ensure that our experimental framework takes both into account, thus also satisfying C3.",
"We achieve task diversity and generalizability by selecting a combination of tasks driven by lower-level syntactic and higher-level semantic features (Lauscher et al., 2020).",
"Finally, we select a set of 9 languages from 8 language families, as listed in Table 1. 3 We evaluate mBERT and monolingual BERT models on five downstream NLP tasks: named entity recognition (NER), sentiment analysis (SA), question answering (QA), universal dependency parsing (UDP), and part-of-speech tagging (POS).",
"4 3 Note that, since we evaluate monolingual performance and not cross-lingual transfer performance, we require training data in the target language.",
"Therefore, we are unable to leverage many of the available multilingual evaluation data such as XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), or XNLI (Conneau et al., 2018).",
"These evaluation sets do not provide any training portions for languages other than English.",
"Additional information regarding our selection of pretrained models is available in Appendix A.1.",
"4 Information on which datasets are associated with which language and the dataset sizes (examples per split) are provided in Appendix A.4.",
"Named Entity Recognition (NER).",
"We rely on: CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003), FiNER (Ruokolainen et al., 2020), Chinese Literature (Xu et al., 2017), KMOU NER, 6 WikiAnn (Pan et al., 2017; Rahimi et al., 2019).",
"Sentiment Analysis (SA).",
"We employ: HARD (Elnagar et al., 2018), IMDb Movie Reviews (Maas et al., 2011), Indonesian Prosa (Purwari-anti and Crisdayanti, 2019), Yahoo Movie Reviews, 7 NSMC, 8 RuReviews (Smetanin and Ko-marov, 2019), Turkish Movie and Product Reviews (Demirtas and Pechenizkiy, 2013), ChnSentiCorp.",
"9 Question Answering (QA).",
"We use: SQuADv1.1 (Rajpurkar et al., 2016), KorQuAD 1.0 (Lim et al., 2019), SberQuAD (Efimov et al., 2020), TQuAD, 10 DRCD (Shao et al., 2019), TyDiQA-GoldP (Clark et al., 2020).",
"Dependency Parsing (UDP).",
"We rely on Universal Dependencies (Nivre et al., 2016, 2020) v2.6 (Zeman et al., 2020) for all languages.",
"Fine-Tuning Setup.",
"For all tasks besides UDP, we use the standard fine-tuning setup of Devlin et al. (2019).",
"For UDP, we use a transformer-based variant (Glavas and Vulic, 2021) of the standard deep biaffine attention dependency parser (Dozat and Manning, 2017).",
"We distinguish between fully fine-tuning a monolingual BERT model and fully fine-tuning mBERT on the task.",
"For both settings, we average scores over three random initializations on the development set.",
"On the test set, we report 5 https://github.com/cl-tohoku/bert-japanese 6 https://github.com/kmounlp/NER 7 https://github.com/dennybritz/sentiment-analysis 8 https://www.lucypark.kr/docs/2015-pyconkr/#39 9 https://github.com/pengming617/bert classification 10 https://tquad.github.io/turkish-nlp-qa-dataset/ Lg Model NER SA QA UDP POS Test Test Dev Test Test F 1 Acc EM / F 1 UAS / LAS Acc AR Monolingual 91.1 95.9 68.3 / 82.4 90.1 / 85.6 96.8 mBERT 90.0 95.4 66.1 / 80.6 88.8 / 83.8 96.8 EN Monolingual 91.5 91.6 80.5 / 88.0 92.1 / 89.7 97.0 mBERT 91.2 89.8 80.9 / 88.4 91.6 / 89.1 96.9 FI Monolingual 92.0 69.9 / 81.6 95.9 / 94.4 98.4 mBERT 88.2 66.6 / 77.6 91.9 / 88.7 96.2 ID Monolingual 91.0 96.0 66.8 / 78.1 85.3 / 78.1 92.1 mBERT 93.5 91.4 71.2 / 82.1 85.9 / 79.3 93.5 JA Monolingual 72.4 88.0 / 94.7 / 93.0 98.1 mBERT 73.4 87.8 / 94.0 / 92.3 97.8 KO Monolingual 88.8 89.7 74.2 / 91.1 90.3 / 87.2 97.0 mBERT 86.6 86.7 69.7 / 89.5 89.2 / 85.7 96.0 RU Monolingual 91.0 95.2 64.3 / 83.7 93.1 / 89.9 98.4 mBERT 90.0 95.0 63.3 / 82.6 91.9 / 88.5 98.2 TR Monolingual 92.8 88.8 60.6 / 78.1 79.8 / 73.2 96.9 mBERT 93.8 86.4 57.9 / 76.4 74.5 / 67.4 95.7 ZH Monolingual 76.5 95.3 82.3 / 89.3 88.6 / 85.6 97.2 mBERT 76.1 93.8 82.0 / 89.3 88.1 / 85.0 96.7 AVG Monolingual 87.4 92.4 70.8 / 84.0 90.0 / 86.3 96.9 mBERT 87.0 91.0 69.7 / 83.3 88.4 / 84.4 96.4 Table 2: Performance on Named Entity Recognition (NER), Sentiment Analysis (SA), Question Answering (QA), Universal Dependency Parsing (UDP), and Part-of-Speech Tagging (POS).",
"the results of the initialization that achieved the highest score on the development set.",
"Evaluation Measures.",
"We report F 1 scores for NER, accuracy scores for SA and POS, unlabeled and labeled attachment scores (UAS & LAS) for UDP, and exact match and F 1 scores for QA.",
"Hyper-Parameters and Technical Details.",
"We use AdamW (Kingma and Ba, 2015) in all experiments, with a learning rate of 3 e 5 .",
"11 We train for 10 epochs with early stopping (Prechelt, 1998).",
"12 11 Preliminary experiments indicated this to be a well performing learning rate.",
"Due to the large volume of our experiments, we were unable to tune all the hyper-parameters for each setting.",
"We found that a higher learning rate of 5 e 4 works best for adapter-based fine-tuning (see later) since the task adapter parameters are learned from scratch (i.e., they are randomly initialized).",
"12 We evaluate a model every 500 gradient steps on the development set, saving the best-performing model based on the respective evaluation measures.",
"We terminate training if no performance gains are observed within five consecutive evaluation runs ( = 2,500 steps).",
"For QA and UDP, we use the F 1 scores and LAS, respectively.",
"For FI and ID QA, we train for 20 epochs due to slower convergence.",
"We train with batch size 32 and max sequence length 256 for all tasks except QA.",
"In QA, the batch size is 24, max sequence length 384, query length 64, and document stride is set to 128.",
"We report our first set of results in Table 2. 13 We find that the performance gap between monolingual models and mBERT does exist to a large extent, confirming anecdotal evidence from prior work.",
"However, we also notice that the score differences are largely dependent on the language and task at hand.",
"The largest performance gains of monolingual models over mBERT are found for FI , TR , KO , and AR .",
"In contrast, mBERT outperforms the IndoBERT ( ID ) model in all tasks except SA, and performs competitively with the JA and ZH monolingual models on most datasets.",
"In general, the gap is particularly narrow for POS tagging, where all models tend to score high (in most cases north of 95% accuracy).",
"ID aside, we also see a clear trend for UDP, with monolingual models outperforming fully fine-tuned mBERT models, most notably for FI and TR .",
"In what follows, we seek to understand the causes of this behavior in relation to different factors such as tokenizers, corpora sizes, as well as languages and tasks in consideration.",
"The size of the pretraining corpora plays an important role in the performance of transformers (Liu et al., 2019; Conneau et al., 2020; Zhang et al., 2020, inter alia ).",
"Therefore, we compare how much data each monolingual model was trained on with the amount of data in the respective language that mBERT has seen during training.",
"Given that mBERT was trained on entire Wikipedia dumps, we estimate the latter by the total number of words across all articles listed for each Wiki.",
"14 For the monolingual LMs, we extract information on pretraining data from the model documentation.",
"If no exact numbers are explicitly stated, and the pretraining corpora are unavailable, we make estimations based on the information provided by the authors.",
"15 The statistics are provided in Figure 1a.",
"For EN , JA , RU , and ZH , both the respective monolingual BERT and mBERT were trained on similar amounts of monolingual data.",
"On the other hand, monolingual BERTs of AR , ID , FI , KO , and TR were trained on about twice ( KO ) up to more than 40 times ( TR ) as much data in their language than mBERT.",
"13 See Appendix Table 8 for the results on development sets.",
"14 Based on the numbers from",
"https://meta.m.wikimedia.org/wiki/List of Wikipedias 15 We provide further details in Appendix A.2.",
"Compared to monolingual models, mBERT is substantially more limited in terms of the parameter budget that it can allocate for each of its 104 languages in its vocabulary.",
"In addition, monolingual tokenizers are typically trained by native-speaking experts who are aware of relevant linguistic phenomena exhibited by their target language.",
"We thus inspect how this affects the tokenizations of monolingual data produced by our sample of monolingual models and mBERT.",
"We tokenize examples from Universal Dependencies v2.6 treebanks and compute two metrics ( Acs, 2019).",
"16 First, the subword fertility measures the average number of sub-words produced per tokenized word.",
"A minimum fertility of 1 means that the tokenizer's vocabulary contains every single word in the text.",
"We plot the fertility scores in Figure 1b.",
"We find that mBERT has similar fertility values as its monolingual counterparts for EN , ID , JA , and ZH .",
"In contrast, mBERT has a much higher fertility for AR , FI , KO , RU , and TR , indicating that such languages are over-segmented.",
"mBERT's fertility is the lowest for EN ; this is due to mBERT having seen the most data in this language during training, as well as English being morphologically poor in contrast to languages such as AR , FI , RU , or TR .",
"17 The second metric we employ is the proportion of words where the tokenized word is continued across at least two sub-tokens (denoted by continuation symbols ##).",
"Whereas the fertility is concerned with how aggressively a tokenizer splits, this metric measures how often it splits words.",
"Intuitively, low scores are preferable for both metrics as they indicate that the tokenizer is well suited to the language.",
"The plots in Figure 1c show similar trends as with the fertility statistic.",
"In addition to AR , FI , KO , RU , and TR , which already displayed differences in fertility, mBERT also produces a proportion of continued words more than twice as high as the monolingual model for ID .",
"18 16 We provide further details in Appendix A.3.",
"17 The JA model is the only monolingual BERT with a fertility score higher than mBERT; its tokenizer is character-based and thus by design produces the maximum number of subwords.",
"18 We discuss additional tokenization statistics, further highlighting the differences (or lack thereof) between the individual monolingual tokenizers and the mBERT tokenizer, in Appendix B.1.",
"The differences in pretraining corpora and tokenizer statistics seem to align with the variations in downstream performance across languages.",
"In particular, it appears that the performance gains of monolingual models over mBERT are larger for languages where the differences between the respective tokenizers and pretraining corpora sizes are also larger ( AR , FI , KO , RU , TR vs. EN , JA , ZH ).",
"19 This implies that both the data size and the tokenizer are among the main driving forces of downstream task performance.",
"To disentangle the effects of these two factors, we pretrain new models for AR , FI , ID , KO , and TR (the languages that exhibited the largest discrepancies in tokenization and pretraining data size) on Wikipedia data.",
"We train four model variants for each language.",
"First, we train two new monolingual BERT models on the same data, one with the original monolingual tokenizer ( MONOMODEL-MONOTOK ) and one with the mBERT tokenizer ( MONOMODEL-MBERTTOK ).",
"20 Second, similar to Artetxe et al. (2020), we retrain the embedding layer of mBERT, once with the respective monolingual tokenizer ( MBERTMODELMONOTOK ) and once with the mBERT tokenizer ( MBERTMODEL-MBERTTOK ).",
"We freeze the transformer and only retrain the embedding weights, thus largely preserving mBERT's multilinguality.",
"The reason we retrain mBERT's embedding layer with its own tokenizer is to further eliminate confounding factors when comparing to the version of mBERT with monolingually retrained embeddings.",
"By comparing models 19 The only exception is ID , where the monolingual model has seen significantly more data and also scores lower on the tokenizer metrics, yet underperforms mBERT in most tasks.",
"We suspect this exception is because IndoBERT is uncased, whereas the remaining models are cased.",
"20 The only exception is ID ; instead of relying on the uncased IndoBERT tokenizer by Wilie et al. (2020), we introduce a new cased tokenizer with identical vocabulary size (30,521).",
"trained on the same amount of data, but with different tokenizers ( MONOMODEL-MONOTOK vs. MONOMODEL-MBERTTOK , MBERTMODEL-MBERTTOK vs. MBERTMODEL-MONOTOK ), we disentangle the effect of the dataset size from the tokenizer, both with monolingual and multilingual LM variants.",
"Pretraining Setup.",
"We pretrain new BERT models for each language on its respective Wikipedia dump.",
"21 We apply two preprocessing steps to obtain clean data for pretraining.",
"First, we use WikiExtractor (Attardi, 2015) to extract text passages from the raw dumps.",
"Next, we follow Pyysalo et al. (2020) and utilize UDPipe (Straka et al., 2016) parsers pretrained on UD data to segment the extracted text passages into texts with document, sentence, and word boundaries.",
"Following Liu et al. (2019); Wu and Dredze (2020), we only use the masked language modeling (MLM) objective and omit the next sentence prediction task.",
"Besides that, we largely follow the default pretraining procedure by Devlin et al. (2019).",
"We pretrain the new monolingual LMs ( MONOMODEL -* ) from scratch for 1M steps.",
"22 We enable whole word masking (Devlin et al., 2019) for the FI monolingual models, following the pretraining procedure for FinBERT (Virta-nen et al., 2019).",
"For the retrained mBERT models ( MBERTMODEL -* ), we train for 250,000 steps following Artetxe et al. (2020).",
"23 We freeze all parameters outside the embedding layer.",
"24 Results.",
"We perform the same evaluations on downstream tasks for our new models as described 21 We use Wiki dumps from June 20, 2020 (e.g., fiwiki-20200720-pages-articles.xml.bz2 for FI ).",
"22 The batch size is 64; the sequence length is 128 for the first 900,000 steps, and 512 for the remaining 100,000 steps.",
"23 We train with batch size 64 and sequence length 512, otherwise using the same hyper-parameters as for the monolingual models.",
"24 For more details see Appendix A.5.",
"The results indicate that the models trained with dedicated monolingual tokenizers outperform their counterparts with multilingual tokenizers in most tasks, with particular consistency for QA, UDP, and SA.",
"In NER, the models trained with multilingual tokenizers score competitively or higher than the monolingual ones in half of the cases.",
"Overall, the performance gap is the smallest for POS tagging (at most 0.4% accuracy).",
"We observe the 25 Full results including development set scores are available in Table 9 of the Appendix.",
"largest gaps for QA (6.1 EM / 4.4 F 1 in ID ), SA (2.2% accuracy in TR ), and NER (1.7 F 1 in AR ).",
"Although the only language in which the monolingual counterpart always comes out on top is KO , the multilingual counterpart comes out on top at most 3/10 times (for AR and TR ) in the other languages.",
"The largest decrease in performance of a monolingual tokenizer relative to its multilingual counterpart is found for SA in TR (0.8% accuracy).",
"Overall, we find that for 38 out of 48 task, model, and language combinations, the monolingual tokenizer outperforms the mBERT counterpart.",
"We were able to improve the monolingual performance of the original mBERT for 20 out of 24 languages and tasks by only replacing the tokenizer and, thus, leveraging a specialized monolingual version.",
"Similar to how the chosen method of tokenization affects neural machine translation quality (Domingo et al., 2019), these results establish that, in fact, the designated pretrained tokenizer plays a fundamental role in the monolingual downstream task performance of contemporary LMs.",
"In 18/24 language and task settings, the monolingual model from prior work (trained on more data) outperforms its corresponding MONOMODELMONOTOK model.",
"4/6 settings in which our MONOMODEL-MONOTOK model performs better are found for ID , where IndoBERT uses an uncased tokenizer and our model uses a cased one, which may affect the comparison.",
"Expectedly, these results strongly indicate that data size plays a major role in downstream performance and corroborate prior research findings (Liu et al., 2019; Conneau et al., 2020; Zhang et al., 2020, inter alia ).",
"Another way to provide more language-specific capacity to a multilingual LM beyond a dedicated tokenizer, thereby potentially making gains in monolingual downstream performance, is to introduce adapters (Pfeiffer et al., 2020b,c; Ustun et al., 2020), a small number of additional parameters at every layer of a pretrained model.",
"To train adapters, usually all pretrained weights are frozen, while only the adapter weights are fine-tuned.",
"26 The adapter-based approaches thus offer increased efficiency and modularity; it is crucial to verify to which extent our findings extend to the more efficient and 26 Pfeiffer et al. (2020b) propose to stack task-specific adapters on top of language adapters and extend this approach in Pfeiffer et al. (2020c) by additionally training new embeddings for the target language.",
"more versatile adapter-based fine-tuning setup.",
"We evaluate the impact of different adapter components on the downstream task performance and their complementarity with monolingual tokenizers in Table 4. 27 Here, + A Task and + A Lang implies adding taskand language-adapters respectively, whereas + MONOTOK additionally includes a new embedding layer.",
"As mentioned, we only fine-tune adapter weights on the downstream task, leveraging the adapter architecture proposed by Pfeiffer et al. (2021).",
"For the + A Task + A Lang setting we leverage pretrained language adapter weights available at AdapterHub.ml (Pfeiffer et al., 2020a).",
"Language adapters are added to the model and frozen while only task adapters are trained on the target task.",
"For the + A Task + A Lang + MONOTOK we train language adapters and new embeddings with the corresponding monolingual tokenizer equally as described in the previous section (e.g. MBERTMODELMONOTOK ), task adapters are trained with a learning rate of 5 e 4 and 30 epochs with early stopping.",
"Results.",
"Similar to previous findings, adapters improve upon mBERT in 18/24 language, and task settings, 13 of which can be attributed to the improved MBERTMODEL-MONOTOK tokenizer.",
"Figure 2 illustrates the average performance of the different adapter components in comparison to the monolingual models.",
"We find that adapters with dedicated tokenizers reduce the performance gap con-27 See Appendix Table 10 for the results on dev sets.",
"siderably without leveraging more training data, and even outperform the monolingual models in QA.",
"This finding shows that adding additional language-specific capacity to existing multilingual LMs, which can be achieved with adapters in a portable and efficient way, is a viable alternative to monolingual pretraining.",
"At first glance, our results displayed in Table 2 seem to confirm the prevailing view that monolingual models are more effective than multilingual models (Ronnqvist et al., 2019; Antoun et al., 2020; de Vries et al., 2019, inter alia ).",
"However, the broad scope of our experiments reveals certain nuances that were previously undiscovered.",
"Unlike prior work, which primarily attributes gaps in performance to mBERT being under-trained (Ronnqvist et al., 2019; Wu and Dredze, 2020), our disentangled results (Table 3) suggest that a large portion of existing performance gaps can be attributed to the capability of the tokenizer.",
"With monolingual tokenizers with lower fertility and proportion-of-continued-words values than the mBERT tokenizer (such as for AR , FI , ID , KO , TR ), consistent gains can be achieved, irrespective of whether the LMs are monolingual (the MONOMODEL -* comparison) or multilingual (a comparison of MBERTMODEL -* variants).",
"Whenever the differences between monolingual models and mBERT with respect to the tokenizer properties and the pretraining corpus size are small (e.g., for EN , JA , and ZH ), the performance gap is typically negligible.",
"In QA, we even find mBERT to be favorable for these languages.",
"Therefore, we conclude that monolingual models are not superior to multilingual ones per se, but gain advantage in direct comparisons by incorporating more pretraining data and using language-adapted tokenizers.",
"Correlation Analysis.",
"To uncover additional patterns in our results (Tables 2, 3, 4), we perform a statistical analysis assessing the correlation between the individual factors (pretraining data size, subword fertility, proportion of continued words) and the downstream performance.",
"Although our framework may not provide enough data points to be statistically representative, we argue that the correlation coefficient can still provide reasonable indications and reveal relations not immediately evident by looking at the tables.",
"Figure 3 shows that both decreases in the proportion of continued words and the fertility correlate with an increase in downstream performance relative to fully fine-tuned mBERT across all tasks.",
"The correlation is stronger for UDP and QA, where we find models with monolingual tokenizers to outperform their counterparts with the mBERT tokenizer consistently.",
"The correlation is weaker for NER and POS tagging, which is also expected, considering the inconsistency of the results.",
"28 Overall, we find that the fertility and the proportion of continued words have a similar effect on the monolingual downstream performance as the corpus size for pretraining; This indicates that the tokenizer's ability of representing a language plays a crucial role; Consequently, choosing a suboptimal tokenizer typically results in deteriorated downstream performance.",
"We have conducted the first comprehensive empirical investigation concerning the monolingual performance of monolingual and multilingual language models (LMs).",
"While our results support the existence of a performance gap in most but not all languages and tasks, further analyses revealed that the gaps are often substantially smaller than what was previously assumed.",
"The gaps exist in certain languages due to the discrepancies in 1) pretraining data size, and 2) chosen tokenizers, and the level of their adaptation to the target language.",
"Further, we have disentangled the impact of pretrained corpora size from the influence of the tokenizers on the downstream task performance.",
"We have trained new monolingual LMs on the same data, but with two different tokenizers; one being the dedicated tokenizer of the monolingual LM provided by native speakers; the other being the automatically generated multilingual mBERT tokenizer.",
"We have found that for (almost) every task and language, the use of monolingual tokenizers outperforms the mBERT tokenizer.",
"Consequently, in line with recent work by Chung et al. (2020), our results suggest that investing more effort into 1) improving the balance of individual languages' representations in the vocabulary of multilingual LMs, and 2) providing language-specific adaptations and extensions of multilingual tokenizers (Pfeiffer et al., 2020c) can reduce the gap between monolingual and multilingual LMs.",
"Another promising future research direction is completely disposing of any (language-specific or multilingual) tokenizers during pretraining (Clark et al., 2021).",
"Our code, pretrained models, and adapters are available at https://github.com/Adapter-Hub/hgiyt.",
"Jonas Pfeiffer is supported by the LOEWE initiative (Hesse, Germany) within the emergenCITY center.",
"The work of Ivan Vulic is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909).",
"We thank Nils Reimers, Prasetya Ajie Utama, and Adhiguna Kuncoro for insightful feedback and suggestions on a draft of this paper."
] | [
"method",
"method",
"objective",
"objective",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Machine reading comprehension is a heavily-studied research and test field for evaluating new pre-trained language models (PrLMs) and fine-tuning strategies, and recent studies have enriched the pre-trained language models with syntactic, semantic and other linguistic information to improve the performance of the models.",
"In this paper, we imitate the human reading process in connecting the anaphoric expressions and explicitly leverage the coreference information of the entities to enhance the word embeddings from the pre-trained language model, in order to highlight the coreference mentions of the entities that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset that is specifically designed to evaluate the coreference-related performance of a model.",
"We use two strategies to fine-tune a pre-trained language model, namely, placing an additional encoder layer after a pre-trained language model to focus on the coreference mentions or constructing a relational graph convolutional network to model the coreference relations.",
"We demonstrate that the explicit incorporation of coreference information in the fine-tuning stage performs better than the incorporation of the coreference information in pre-training a language model.",
"Machine reading comprehension (MRC), a task that automatically identifies one or multiple words from a given passage as the context to answer a specific question for that passage, is widely used in information retrieving, search engines, and dialog systems.",
"Several datasets on MRC that limit the answer to one single word or multiple words from the passage are introduced, including TREC Corresponding author.",
"# Equal contribution.",
"This work was supported in part by the Key Projects of National Natural Science Foundation of China under Grants U1836222 and 61733011.",
"Context: Frankie Bono , a mentally disturbed hitman from Cleveland, comes back to his hometown in New York City during Christmas week to kill a middle-management mobster, Troiano.",
"...First he follows his target to select the best possible location, but opts to wait until Troiano isn't being accompanied by his bodyguards.",
"...",
"Losing his nerve, Frankie calls up his employers to tell them he wants to quit the job.",
"Unsympathetic, the supervisor tells him he has until New Year's Eve to perform the hit.",
"Question: What is the first name of the person who has until New Year's Eve to perform a hit?",
"Answer: he ->FrankieQuestion: What is the first name of the person who follows their target to select the best possible location?",
"Answer: he ->Frankie Table 1: An example from QUOREF: coreference resolution is required to extract the correct answer.",
"(Harman, 1993), SQuAD (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), and QuAC (Choi et al., 2018), and intensive efforts were made to build new models that surpass the human performance on these datasets, including the pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019a) or the ensemble models that outperform the human, in particular on SQuAD (Lan et al., 2020; Yamada et al., 2020; Zhang et al., 2021).",
"More challenging datasets are also introduced, which require several reasoning steps to answer (Yang et al., 2018; Qi et al., 2021), the understanding of a much larger context (Kocisk et al., 2018) or the understanding of the adversarial content and numeric reasoning (Dua et al., 2019).",
"Human texts, especially long texts, are abound in deictic and anaphoric expressions that refer to the entities in the same text.",
"These deictic and anaphoric expressions, in particular, constrain the generalization of the models trained without explicit awareness of the coreference.",
"The QUOREF dataset (Dasigi et al., 2019) is specifically designed to validate the performance 1281 of the models in coreferential reasoning, in that 78% of the manually analyzed questions cannot be answered without coreference (Dasigi et al., 2019).",
"The example in Table 1 shows that the answers to the two questions cannot be directly retrieved from the sentences because the word in the corresponding sentence of the context is an anaphoric pronoun he , and to obtain the correct answers, tracing of its antecedent Frankie is required.",
"The reasoning in coreference resolution is required to successfully complete the task in machine reading comprehension in the SQuAD-style QUOREF dataset.",
"Pre-trained language models, including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019b), that are trained through self-supervised language modeling objectives like masked language modeling, perform rather poorly in the QUOREF dataset.",
"We argue that the reason for the poor performance is that those pre-trained language models do learn the background knowledge for coreference resolution but may not learn adequately the coreference information required for the coreference-intensive reading comprehension tasks.",
"In the human reading process, as shown in the empirical study of first-year English as a second language students during the reading of expository texts, anaphoric resolution requires a reader to perform a text-connecting task across textual units by successfully linking an appropriate antecedent (among several prior antecedents) with a specific anaphoric referent and students who were not performing well academically were not skilled at resolving anaphors (Pretorius, 2005) and the direct instruction on anaphoric resolution elevated the readers' comprehension of the text (Baumann, 1986).",
"In addition, the studies on anaphor resolution in both adults using eye movement studies (Duffy and Rayner, 1990; van Gompel et al., 2004) and children (Joseph et al., 2015) evidenced a two-stage model of anaphor resolution proposed by Garrod and Terras(Garrod and Terras, 2000).",
"The first stage is an initial lexically driven, context-free stage known as bonding, whereby a link between the anaphor and a potential antecedent is made, followed by a later process known as resolution, which resolves the link with respect to the overall discourse context (Joseph et al., 2015).",
"The pre-trained language models only capture the semantic representations of the words and sentences, without explicitly performing such text-connecting actions in the specific coreference-intensive reading comprehension task, thus they do not learn adequate knowledge to solve the complex coreference reasoning problems.",
"Explicitly injecting external knowledge such as linguistics and knowledge graph entities, has been shown effective to broaden the scope of the pre-trained language models' capacity and performance, and they are often known as X-aware pre-trained language models (Zhang et al., 2020; Liu et al., 2020; Kumar et al., 2021).",
"It is plausible that we may imitate the anaphoric resolution process in human's two-stage reading comprehension of coreference intensive materials and explicitly make the text-connecting task in our fine-tuning stage as the second stage in the machine reading comprehension.",
"As an important tool that captures the anaphoric relationship between words or phrases, coreference resolution that clusters the mentions of the same entity within a given text is an active field in natural language processing (Chen et al., 2011; Sangeetha, 2012; Huang et al., 2019; Joshi et al., 2020; Kirstain et al., 2021), with neural networks taking the lead in the coreference resolution challenges.",
"The incorporation of the coreference resolution results in the pre-training to obtain the coreference-informed pre-trained language models, such as CorefBERT and CorefRoBERTa (Ye et al., 2020), has shown positive improvements on the QUOREF dataset, a dataset that is specially designed for measuring the models' coreference capability, but the performance is still considerably below the human performance.",
"In this paper, we make a different attempt to leverage the coreference resolution knowledge and complete the anaphoric resolution process in reading comprehension.",
"We propose a fine-tuned coref-aware model that directly instructs the model to learn the coreference information 1 .",
"Our model can be roughly divided into three major components: 1) pre-trained language model component.",
"We use the contextualized representations from the pre-trained language models as the token embeddings for the downstream reading comprehension tasks.",
"2) coreference resolution component.",
"NeuralCoref, an extension to the spaCy, is applied here to extract the mention clusters from the context.",
"3) 1 Our codes are publicly available at https://github.",
"coreference enrichment component.",
"We apply three methods in incorporating the coreference knowledge: additive attention enhancement, multiplication attention enhancement, and relation-enhanced graph-attention network + fusing layer.",
"In this paper, we show that by simulating the human behavior in explicitly connecting the anaphoric expressions to the antecedent entities and infusing the coreference knowledge our model can surpass that of the pre-trained coreference language models on the QUOREF dataset.",
"Recent studies on machine reading comprehension mainly rely on the neural network approaches.",
"Before the prevalence of the pre-trained language models, the main focus was to guide and fuse the attentions between questions and paragraphs in their models, in order to gain better global and attended representation (Huang et al., 2018; Hu et al., 2018; Wang et al., 2018).",
"After the advent of the BERT (Devlin et al., 2019), there were two trends in solving the machine reading comprehension.",
"The first trend was to develop better pre-trained language models that captured the representation of contexts and questions (Liu et al., 2019; Yang et al., 2019a; Lewis et al., 2020), and more datasets on question answering were introduced to increase the difficulty in this task, including NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), QuAC (Choi et al., 2018), HotpotQA (Yang et al., 2018), NarrativeQA (Kocisk et al., 2018), DROP (Dua et al., 2019), and BeerQA (Qi et al., 2021).",
"However, the raw pre-trained language models, being deprived of the in-domain knowledge, the structures and the reasoning capabilities required for the datasets, often perform unsatisfactorily in the hard datasets, being significantly below the human performance.",
"Efforts had been made to boost the model performance by enriching the pre-trained language models with specific syntactic information (Ye et al., 2020) or semantic information.",
"Another trend was to fine-tune the pre-trained language model and added additional layers to incorporate task-specific information for better representation, in particular, the coreference information (Ouyang et al., 2021; Liu et al., 2021).",
"For some questions that have multi-span answers, in other words, a single answer contains two or more discontinuous entities in the context, the BIO (B denotes the start token of the span; I denotes the subsequent tokens and O denotes tokens outside of the span) tagging mechanism is used to identify these answers and improve the model performance (Segal et al., 2020).",
"Recent studies also explored the possibilities of prompt-based learning in machine reading comprehension, including a new pre-training scheme that changed the question answering into a few-shot span selection model (Ram et al., 2021) and a new model that fine-tuned the prompts with knowledge (Chen et al., 2021).",
"The performance of the models using prompt-based learning is significantly higher than the baseline models, but is still below that of the fine-tuned models (Chen et al., 2021).",
"Graph neural network (GNN) captures the relations among the entities in the text by modeling the entities as nodes in the graph and learning the weights via the message passing between the nodes of the graph (Kipf and Welling, 2017; Velickovic et al., 2018).",
"As the dependencies in the natural language text, the relations among entities and knowledge-base triples can be relatively easily modeled in a graph structure, graph neural networks are used for numeric reasoning (Ran et al., 2019), for multi-document question answering by connecting mentions of candidate answers (De Cao et al., 2019), and for multi-hop reasoning by adding the edges with co-occurrence relations(Qiu et al., 2019), or with contextual sentences as embeddings (Tu et al., 2020), or with a hierarchical paragraph-sentence-entity graph (Fang et al., 2020), but none of them had attempted to connect the anaphoric expressions and their antecedents as a coreference resolution strategy in a graph neural network for machine reading comprehension.",
"Our model, inspired by the anaphoric connecting behavior in the human reading comprehension process, consists of four parts, namely, a pre-trained language model, a coreference resolution component, a graph encoder and a fusing layer.",
"Context in the machine reading comprehension task is first processed by a coreference resolution model to identify the underlying coreference clusters, 1283 Figure 1: Coref-aware fine-tuning for machine reading comprehension.",
"which are formed by dividing the entities and anaphoric expressions in the context into disjoint groups on the principle that the mentions of the same entity should be in the same group.",
"Then we use the coreference clusters to construct a coreference matrix that labels each individual cluster and identifies each element in the same cluster with the same cluster number.",
"Meanwhile, the context is tokenized by the tokenizer defined in the pre-trained language model and the embeddings for each token are retrieved from that model.",
"We propose three methods for connecting the anaphoric expressions and their antecedent entity: 1) adding the coreference matrix with each attention head in the additional coreference encoder layer; 2) multiplying the coreference matrix with each attention head in the additional coreference encoder layer; 3) constructing a graph neural network based on the coreference matrix with the edges corresponding to the coreference relations and then fusing the graph representation in the graph neural network with the embeddings of the context, as shown in Figure",
"1. The final representations from either one of the three methods are fed into the classifier to calculate the start/end span of the question.",
"Coreference resolution is the process that identifies all the expressions of the same entity in the text, clusters them together as coreference clusters, and locates their spans.",
"For example, after coreference resolution for the text Losing his nerve, Frankie calls up his employers to tell them he wants to quit the job.",
", we obtained two mention clusters [Frankie: [his, Frankie, his, he], his employers: [his employers, them]] , where Frankie is the head entity and his, Frankie, his, he are all the expressions referring to this entity, as shown in Figure",
"2. As pre-trained language models use subwords in their tokenization and the coreference resolution uses word in the tokenization, a mapping is required to establish the relations.",
"For the input sequence X = { x 1 , ...x n } of length n, the words W = { w 1 , ..., w m } obtained from the coreference tokenization are mapped to the corresponding subwords (tokens) T = { t 1 , ..., t k } from the tokenizer in the pre-trained language model, with one word contains one or more than one subwords.",
"Then we constructed a coreference array with the 2 Image generated from https://huggingface.co/coref/ 1284 Figure 2: Coreference resolution: the red curves connecting the mentions of the same entity and marking the coreference relations.",
"Tokens in the same mention cluster have the same sequence number n in the coreference array.",
"We use the standard relational graph convolutional network (RGCN) (Sejr Schlichtkrull et al., 2018) to obtain the graph representation of the context enriched with coreference information.",
"We use the coreference matrix and the word embeddings to construct a directed and labeled graph G = ( V , E , R ) , with nodes (subwords) v i V , edges(relations) ( v i , r, v j )) E , where r R is one of the two relation types (1 indicates coreference relation and self-loop; 2 indicates global relation), as shown in Figure 3 .",
"basis decomposition to reduce model parameter size and prevent overfitting:",
"h l +1 i = (cid:0) W ( l ) 0 h ( l ) i + (cid:88) r R (cid:88) j N ri 1 c i,r W ( l ) r h ( l ) (cid:1) , W ( l ) r = B (cid:88) b =1 a ( l ) rb V ( l ) b ,",
"where N ri denotes the set of neighbor indices of node i under the relation r R , c i,r is the normalization constant, and W ( l ) r is a linear combination of basis transformation V ( l ) b with coefficient a ( l ) rb .",
"In addition to the Graph Neural Network method, we also explore the possibility of using the self-attention mechanism (Vaswani et al., 2017) to explicitly add an encoder layer and incorporate the coreference information into the attention heads of that layer, so as to guide the model to identify the mentions in the cluster as the same entity.",
"We use two methods to fuse the coreference information and the original embeddings from the pre-trained language model: additive attention fusing and dot product attention fusing (multi-plication).",
"Given the coreference array A = { m 1 , 0 , m 1 , m 2 , 0 , m 2 , m 3 , 0 , m 3 , m 1 ... } , where m n denotes the nth mention cluster, and 0 denotes no mentions, the enriched attention for additive attention fusing is formulated as: Attention ( Q, K, V ) = softmax ( QKT d k + MA ) V, head i = Attention ( QW Qi , KW Ki , V W Vi ) , (3) where MA is a coreference matrix constructed from the coreference array A with the element value 1285 in the matrix calculated by adding (for additive model) or multiplying (for multiplication model) the coreference hyper-parameter coref weight with the original attention weight if the element belongs to the coreference array, Q, K, V are the query, key and value respectively, d k is the dimension of the keys, and W i is trainable parameter.",
"For dot product (multiplication) fusing, it is formulated as: Attention ( Q, K, V ) = softmax ( QKT d k (cid:12) MA ) V, head i = Attention ( QW Qi , KW Ki , V W Vi ) , (4) where we calculate the dot product of QKT d k and a coreference matrix MA constructed from the coreference array A .",
"A machine reading comprehension task expects the model to output the start and end positions of the answer.",
"For the RCGN method, we fuse the hidden state of nodes v i in the last layer of RCGN and the embeddings from the pre-trained language model with a fully-connected (FC) layer , and then calculate the start/end positions of the answer.",
"where E prLM denotes the embeddings from the pre-trained language model, E gnn denotes the embeddings from the graph encoder, P s denotes the predicted start positions, W s denotes the weight matrix and S denotes the text feature.",
"For the two methods that add one additional encoder layer for additive or multiplication attention enrichment, we directly used the output of that encoder layer for the follow-up processing.",
"Following the practice of CorefRoBERTa (Ye et al., 2020) in handling multiple answers for the same question, we use the cross entropy to calculate the losses for each answer if the question has multiple answers: E n = F C ( E prLM , n ) , L s = n (cid:88) i H ( p s i, q s i ) , L e = n (cid:88) i H ( p e i, q e i ) , L total = avg ( L s + L e + L ( E n , n )) , (6) where n denotes the answer count as a hyper parameter for handling multiple answers, E n denotes the results after the linear transformation of the embeddings for the answer count and then we obtains the predicted start positions and end positions from that embeddings, L ( E n , n ) denotes the cross-entropy loss between the transformed embeddings and the answer count, L s denotes the total loss of the start positions, L e denotes the total loss of the end positions and L total denotes the combined total loss.",
"We developed three models based on the sequence-to-sequence Transformer architecture.",
"The pre-trained RoBERTa-large was used as the base model and then we used the following three methods to fine-tune it: 1) Coref GNN : feeding the coreference information into a graph neural network and then fuse the representations; 2) Coref AddAtt : adding the coreference weights with the self-attention weights; 3) Coref MultiAtt : calculating the dot product of the coreference weights with the self-attention weights.",
"We used the results from CorefRoBERTa (Ye et al., 2020) as our baselines.",
"Our coreference resolution was implemented in spaCy (Honnibal and Montani, 2017) and NeuralCoref.",
"NeuralCoref is an extension for spaCy that is trained on the OntoNotes 5.0 dataset based on the training process proposed by Clark and Manning (Clark and Manning, 2016), which identifies the coreference clusters in the text as mentions.",
"In particular, spaCy 2.1.0 and NeuralCoref 4.0 are used, because the latest spaCy version 3.0+ has compatibility issues with NeuralCoref and extra efforts are required to solve the issues.",
"The neural network implementation was implemented in PyTorch (Paszke et al., 2019) and Hugging Face Transformers (Wolf et al., 2020).",
"We used the embeddings of the pre-trained language model RoBERTa LARGE , with the relational graph convolutional network implemented in Deep Graph Library (DGL) (Wang et al., 2020).",
"We used Adam (Kingma and Ba, 2015) as our optimizer, and the learning-rate was {1e-5, 2e-5, 3e-5}.",
"We trained each model for {4, 6} epochs and selected the best checkpoints on the development dataset with Exact match and F1 scores.",
"All experiments were run on 1286 Model Dev Test EM F1 EM F1 QANet 34.41 38.26 34.17 38.90 QANet + BERT BASE 43.09 47.38 42.41 47.20 BERT +BASE 61.29 67.25 61.37 68.56 CorefBERT +BASE 66.87 72.27 66.22 72.96 BERT +LARGE 67.91 73.82 67.24 74.00 CorefBERT +LARGE 70.89 76.56 70.67 76.89 RoBERTa +LARGE 74.15 81.05 75.56 82.11 CorefRoBERTa +LARGE 74.94 81.71 75.80 82.81 Coref GNN 79.23 85.89 78.60 85.15 Coref AddAtt 80.02 86.13 79.11 85.86 Coref MultiAtt 79.85 86.02 78.52 85.27 Table 2: Exact Match and F1 scores of baselines and our proposed models.",
"Our evaluation was performed on the QUOREF dataset (Dasigi et al., 2019).",
"The dataset contains a train set with 3,771 paragraphs and 19,399 questions, a validation set with 454 paragraphs and 2,418 questions, and a test set with 477 paragraphs and 2,537 questions.",
"We quantitatively evaluated the three methods and reported the standard metrics: exact match score (EM) and word-level F1-score (F1) (Rajpurkar et al., 2016).",
"As shown in Table 2, compared with the baseline model CorefRoBERTa, the performance of our models improves significantly.",
"In particular, Coref AddAtt performs best with 5.08%, 4.42% improvements over the baseline model in Exact Match and F1 score respectively on the QUOREF dev set, and 3.05% (F1) and 3.31% (Exact Match) improvements on the QUOREF test set.",
"Coref GNN and Coref MultiAtt also outperform the baseline model by 2.34% (F1) and 2.80% (Exact Match), and 2.46% (F1) and 2.72% (Exact Match) respectively on the test set.",
"Compared with the RoBERTa LARGE that does not use any explicit coreference information in the training or the CorefRoBERTa LARGE that uses the coreference information in the training, the improvements of our model are higher, which proves the effectiveness of the explicit coreference instructions in our strategies.",
"As shown in Table 2, compared with RoBERTa LARGE , our methods added only one component that explicitly incorporates the coreference information, and the three methods we used all exhibit considerable improvements over the baselines.",
"Compared with RoBERTa LARGE which has 354M parameters, Coref AddAtt and the Coref MultiAtt add an encoder layer, which adds over 12M parameters.",
"For the Coref GNN method, we added one hidden layer in GNN and two linear layers to transform the feature dimensions, with around 68.7K parameters in total.",
"Our predictions are that intuitively with more focuses on the coreference clues, the models perform better on the task that requires intensive coreference resolution, as we have explicitly increased the attention weights to connect the words in the same coreference mention clusters.",
"However, the overall performance of the models is also limited by the performance of the coreference component we use, namely, NeuralCoref.",
"To understand the model's performance beyond the automated metrics, we analyze our predicted answers qualitatively.",
"Table 3 compares the representative answers predicted by our models and CorefRoBERTa LARGE .",
"These examples require that the models should precisely locate the entity from several distracting entities for the anaphoric expression that directly answers the questions.",
"Our model demonstrates that, after resolving the anaphoric expression with the antecedents in the context and enhancing with the coreference information by connecting the anaphoric expression with its antecedents, such as the connection from her to Henrietta in the first example and the connection from she to Rihanna in the second example, our model accurately locates the entity name among several names in the context, which the CorefRoBERTa LARGE fails to uncover.",
"We further explored the effects of the anaphoric connections on the attention weights by comparing the attention weights of the sample in the first row in Table 3 between our Coref AddAtt and CorefRoBERTa LARGE model, as shown in Figure 4.",
"It is clear that the anaphoric expressions are not connected in the CorefRoBERTa LARGE model, 1287 Coref-resolved Context (Abbreviated) Question Answers Henrietta take an immediate liking to her, and she asks if Luce can sit by her during the wedding.",
"as indicated by the obtrusive attentions on Rachel and Her in the heatmap on the right of the figure.",
"For the Coref AddAtt , the varying colors on the left heat-map indicate the connection strength among the anaphoric expressions and evidence the effects of explicit coreference addition that smooth and strength the attentions for anaphoric expressions, which contributes to the higher performance of our models.",
"Despite the improvements made by our model, it still fails to predict the correct answers for some questions.",
"We analyzed and summarized several error cases as follows.",
"limitations of the coreference resolution component, NeuralCoref, as its performance had not reached 80% in F1 for MUC, B 3 or CEAF 4 (Clark and Manning, 2016), which is evidenced by the failure in resolving the antecedent of the anaphoric expression its as the academy in the first sample, and the failure in clustering the anaphoric expressions her with the entity Beyonc in the second sample, despite the success in resolving the second Gilman to its antecedent Rockwell \"Rocky\" Gilman .",
"The second type of errors is more complicated, which involves multi-step reasoning that cannot be handled by simply adding the coreference information.",
"To correctly answer the second question, the model should perform two successive tasks successfully: 1) it should understand that Mathew Knowles is the father 1288 Coref-resolved Context (Abbreviated) Question Answers West Point cadet Rockwell \"Rocky\" Gilman is called before a hearing brought after an influential cadet, Raymond Denmore, Jr., is forced to leave the academy...Denmore's attorney, Lew Proctor, attacking the academy and its Honor Code system, declares that Gilman is unfit and possibly criminally liable.",
"of Beyonc ; 2) it should understand the world knowledge that the last name of Beyonc is the same as her father's, which should be Knowles .",
"This type of errors shows that our model performs poorly on the questions that require multi-step reasoning.",
"The third type of errors is caused by the questions that have multiple items in an answer.",
"A hyperparameter that limits the total number of items in an answer is used in our models and this parameter is set to 2 in the training, thus when the number of total items in the answer exceeds 2, our models fail to predict the exact items, and the third item Annie is ignored.",
"In this paper, we present intuitive methods to solve coreference-intensive machine reading comprehension tasks by following the reading process of human in which people connect the anaphoric expressions with explicit instructions.",
"We demonstrate that all our three fine-tuning methods, including Coref GNN , Coref AddAtt and Coref MultiAtt , are superior to the pre-trained language models that incorporate the coreference information in the pretraining stage, such as CorefRoBERTa LARGE .",
"As the fine-tuning methods rely on the coreference resolution models supplied by other researchers, their performance is also constrained by the accuracy of those coreference resolution models.",
"In addition, the questions that require multistep reasoning, span multiple entities or contain multiple answer items also pose the challenges to our models.",
"In the future, with more in-depth study on human reasoning in reading comprehension and more progress in graph neural networks, the GNN-based coreference graph can be enriched with more edge types and diverse structures to leverage more linguistic knowledge and gain better performance.",
"We would like to thank Yuchen He for the help of this work.",
"We also appreciate the valuable feedback from the anonymous reviewers."
] | [
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters.",
"However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices.",
"In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model.",
"Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning.",
"Basically, MobileBERT is a thin version of BERTLARGE , while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks.",
"To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERTLARGE model.",
"Then, we conduct knowledge transfer from this teacher to MobileBERT.",
"Empirical studies show that MobileBERT is 4.3 smaller and 5.5 faster than BERTBASE while achieving competitive results on well-known benchmarks.",
"On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77 .",
"7 ( 0 . 6 lower than BERTBASE ), and 62 ms latency on a Pixel 4 phone.",
"On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90 .",
"0 / 79 .",
"2 ( 1 . 5 / 2 . 1 higher than BERTBASE ).",
"The NLP community has witnessed a revolution of pre-training self-supervised models.",
"These models usually have hundreds of millions of parameters (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019).",
"Among these models, BERT (Devlin et al., 2018) This work was done when the first author was an intern at Google Brain.",
"shows substantial accuracy improvements.",
"However, as one of the largest models ever in NLP, BERT suffers from the heavy model size and high latency, making it impractical for resource-limited mobile devices to deploy the power of BERT in mobile-based machine translation, dialogue modeling, and the like.",
"There have been some efforts that task-specifically distill BERT into compact models (Turc et al., 2019; Tang et al., 2019; Sun et al., 2019; Tsai et al., 2019).",
"To the best of our knowledge, there is not yet any work for building a task-agnostic lightweight pre-trained model, that is, a model that can be generically fine-tuned on different downstream NLP tasks as the original BERT does.",
"In this paper, we propose MobileBERT to fill this gap.",
"In practice, task-agnostic compression of BERT is desirable.",
"Task-specific compression needs to first fine-tune the original large BERT model into a task-specific teacher and then distill.",
"Such a process is much more complicated (Wu et al., 2019) and costly than directly fine-tuning a task-agnostic compact model.",
"At first glance, it may seem straightforward to obtain a task-agnostic compact BERT.",
"For example, one may just take a narrower or shallower version of BERT, and train it until convergence by minimizing a convex combination of the prediction loss and distillation loss (Turc et al., 2019; Sun et al., 2019).",
"Unfortunately, empirical results show that such a straightforward approach results in significant accuracy loss (Turc et al., 2019).",
"This may not be that surprising.",
"It is well-known that shallow networks usually do not have enough representation power while narrow and deep networks are difficult to train.",
"Our MobileBERT is designed to be as deep as BERTLARGE while each layer is made much narrower via adopting bottleneck structures and balancing between self-attentions and feed-forward Multi-HeadAttention Add & Norm FeedForward Add & Norm L x Multi-HeadAttention Add & Norm FeedForward Add & Norm Add & Norm L x Linear Linear Multi-HeadAttention Add & Norm FeedForward Add & Norm Add & Norm L x Linear Linear xF",
"networks (Figure 1).",
"To train MobileBERT, a deep and thin model, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERTLARGE model (IB-BERT).",
"Then, we conduct knowledge transfer from IB-BERT to MobileBERT.",
"A variety of knowledge transfer strategies are carefully investigated in our empirical studies.",
"Empirical evaluations 1 show that MobileBERT is 4.3 smaller and 5.5 faster than BERTBASE , while it can still achieve competitive results on well-known NLP benchmarks.",
"On the natural language inference tasks of GLUE, MobileBERT can achieve a GLUE score of 77 .",
"7 , which is only 0 .",
"6 lower than BERTBASE , with a latency of 62 ms on a Pixel 4 phone.",
"On the SQuAD v1.1/v2.0 question answering task, MobileBER obtains a dev F1 score of 90 .",
"3 / 80 .",
"2 , which is even 1 .",
"5 / 2 .",
"1 higher than BERTBASE .",
"Recently, compression of BERT has attracted much attention.",
"Turc et al. (2019) propose to pre-train the smaller BERT models to improve task-specific knowledge distillation.",
"Tang et al. (2019) distill BERT into an extremely small LSTM model.",
"Tsai et al. (2019) distill a multilingual BERT into smaller BERT models on sequence labeling tasks.",
"Clark et al. (2019b) use several single-task BERT 1 The code and pre-trained models will be available at https://github.com/google-research/ google-research/tree/master/mobilebert .",
"models to teach a multi-task BERT.",
"Liu et al. (2019a) distill knowledge from an ensemble of BERT models into a single BERT.",
"Concurrently to our work, Sun et al. (2019) distill BERT into shallower students through knowledge distillation and an additional knowledge transfer of hidden states on multiple intermediate layers.",
"Jiao et al. (2019) propose TinyBERT, which also uses a layer-wise distillation strategy for BERT but in both pre-training and fine-tuning stages.",
"Sanh et al. (2019) propose DistilBERT, which successfully halves the depth of BERT model by knowledge distillation in the pre-training stage and an optional fine-tuning stage.",
"In contrast to these existing literature, we only use knowledge transfer in the pre-training stage and do not require a fine-tuned teacher or data augmentation (Wu et al., 2019) in the down-stream tasks.",
"Another key difference is that these previous work try to compress BERT by reducing its depth, while we focus on compressing BERT by reducing its width, which has been shown to be more effective (Turc et al., 2019).",
"In this section, we present the detailed architecture design of MobileBERT and training strategies to efficiently train MobileBERT.",
"The specific model settings are summarized in Table 1.",
"These settings are obtained by extensive architecture search experiments which will be presented in Section 4.1.",
"The architecture of MobileBERT is illustrated in Figure",
"1(c).",
"It is as deep as BERTLARGE , but each building block is made much smaller.",
"As shown in Table 1, the hidden dimension of each building block is only 128.",
"On the other hand, we introduce two linear transformations for each building block to adjust its input and output dimensions to 512.",
"Following the terminology in (He et al., 2016), we refer to such an architecture as bottleneck.",
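A minimal sketch of the bottleneck idea in PyTorch; the placeholder body stands in for the thin transformer layer, and residual-connection details of the real model are omitted.

```python
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """Sketch of the bottleneck: linear maps adjust the 512-dim inter-block
    feature map down to the 128-dim intra-block hidden size and back up."""
    def __init__(self, inter_size=512, intra_size=128):
        super().__init__()
        self.down = nn.Linear(inter_size, intra_size)  # input bottleneck
        self.up = nn.Linear(intra_size, inter_size)    # output bottleneck
        self.body = nn.Identity()                      # placeholder for MHA + stacked FFNs

    def forward(self, x):
        return self.up(self.body(self.down(x)))
```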
"It is challenging to train such a deep and thin network.",
"To overcome the training issue, we first construct a teacher network and train it until convergence, and then conduct knowledge transfer from this teacher network to MobileBERT.",
"We find that this is much better than directly training MobileBERT from scratch.",
"Various training strategies will be discussed in a later section.",
"Here, we introduce the architecture design of the teacher network which is illustrated in Figure",
"1(b).",
"In fact, the teacher network is just BERTLARGE while augmented with inverted -bottleneck structures (San-dler et al., 2018) to adjust its feature map size to 512.",
"In what follows, we refer to the teacher network as IB-BERTLARGE .",
"Note that IB-BERT and MobileBERT have the same feature map size which is 512.",
"Thus, we can directly compare the layer-wise output difference between IB-BERT and MobileBERT.",
"Such a direct comparison is needed in our knowledge transfer strategy.",
"It is worth pointing out that the simultaneously introduced bottleneck and inverted-bottleneck structures result in a fairly flexible architecture design.",
"One may either only use the bottlenecks for MobileBERT (correspondingly the teacher becomes BERTLARGE ) or only the inverted-bottlenecks for IB-BERT (then there is no bottleneck in MobileBERT) to align their feature maps.",
"However, when using both of them, we can allow IB-BERTLARGE to preserve the performance of BERTLARGE while having MobileBERT suffi-ciently compact.",
"A problem introduced by the bottleneck structure of MobileBERT is that the balance between the Multi-Head Attention (MHA) module and the FeedForward Network (FFN) module is broken.",
"MHA and FFN play different roles in the Transformer architecture: The former allows the model to jointly attend to information from different subspaces, while the latter increases the non-linearity of the model.",
"In original BERT, the ratio of the parameter numbers in MHA and FFN is always 1:2.",
"But in the bottleneck structure, the inputs to the MHA are from wider feature maps (of inter-block size), while the inputs to the FFN are from narrower bottlenecks (of intra-block size).",
"This results in that the MHA modules in MobileBERT relatively contain more parameters.",
"To fix this issue, we propose to use stacked feedforward networks in MobileBERT to re-balance the relative size between MHA and FFN.",
"As illustrated in Figure",
"1(c), each MobileBERT layer contains one MHA but several stacked FFN.",
"In MobileBERT, we use 4 stacked FFN after each MHA.",
"By model latency analysis 2 , we find that layer normalization (Ba et al., 2016) and gelu activation (Hendrycks and Gimpel, 2016) accounted for a considerable proportion of total latency.",
"Therefore, we propose to replace them with new operations in our MobileBERT.",
"Remove layer normalization We replace the layer normalization of a n -channel hidden state h with an element-wise linear transformation: NoNorm ( h ) = h + , (1) where , R n and denotes the Hadamard product.",
"Please note that NoNorm has different properties from LayerNorm even in test mode since the original layer normalization is not a linear operation for a batch of vectors.",
"Use relu activation We replace the gelu activation with simpler relu activation (Nair and Hinton, 2010).",
"The embedding table in BERT models accounts for a substantial proportion of model size.",
"To compress the embedding layer, as shown in Table 1, we reduce the embedding dimension to 128 in MobileBERT.",
"Then, we apply a 1D convolution with kernel size 3 on the raw token embedding to produce a 512 dimensional output.",
"We propose to use the following two knowledge transfer objectives, i.e., feature map transfer and attention transfer, to train MobileBERT.",
"Figure 1 illustrates the proposed layer-wise knowledge transfer objectives.",
"Our final layer-wise knowledge transfer loss L (cid:96)KT for the (cid:96) th layer is a linear combination of the two objectives stated below: Feature Map Transfer (FMT) Since each layer in BERT merely takes the output of the previous layer as input, the most important thing in layer-wise knowledge transfer is that the feature maps of each layer should be as close as possible to those of the teacher.",
"In particular, the mean squared error between the feature maps of the MobileBERT 2 A detailed analysis of effectiveness of operational optimizations on real-world inference latency can be found in Section 4.6.1.",
"student and the IB-BERT teacher is used as the knowledge transfer objective: L (cid:96)FMT = 1 T NT (cid:88) t =1 N (cid:88) n =1 ( H trt,(cid:96),n H stt,(cid:96),n ) 2 , (2) where (cid:96) is the index of layers, T is the sequence length, and N is the feature map size.",
"In practice, we find that decomposing this loss term into normalized feature map discrepancy and feature map statistics discrepancy can help stabilize training.",
"Attention Transfer (AT) The attention mechanism greatly boosts the performance of NLP and becomes a crucial building block in Transformer and BERT (Clark et al., 2019a; Jawahar et al., 2019).",
"This motivates us to use self-attention maps from the well-optimized teacher to help the training of MobileBERT in augmentation to the feature map transfer.",
"In particular, we minimize the KL-divergence between the per-head self-attention distributions of the MobileBERT student and the IB-BERT teacher: L (cid:96)AT = 1 T AT (cid:88) t =1 A (cid:88) a =1 DKL ( a trt,(cid:96),a || a stt,(cid:96),a ) , (3) where A is the number of attention heads.",
"Pre-training Distillation (PD) Besides layer-wise knowledge transfer, we can also use a knowledge distillation loss when pre-training MobileBERT.",
"We use a linear combination of the original masked language modeling (MLM) loss, next sentence prediction (NSP) loss, and the new MLM Knowledge Distillation (KD) loss as our pretraining distillation loss: LPD = LMLM + (1 ) LKD + LNSP , (4) where is a hyperparameter in (0 , 1) .",
"Given the objectives defined above, there can be various combination strategies in training.",
"We discuss three strategies in this paper.",
"Auxiliary Knowledge Transfer In this strategy, we regard intermediate knowledge transfer as an auxiliary task for knowledge distillation.",
"We use a single loss, which is a linear combination of knowledge transfer losses from all layers as well as the pre-training distillation loss.",
"Joint Knowledge Transfer However, the intermediate knowledge of the IB-BERT teacher (i.e. attention maps and feature maps) may not be an optimal solution for the MobileBERT student.",
"Therefore, we propose to separate these two loss terms, where we first train MobileBERT with all layer-wise knowledge transfer losses jointly, and then further train it by pre-training distillation.",
"Progressive Knowledge Transfer One may also concern that if MobileBERT cannot perfectly mimic the IB-BERT teacher, the errors from the lower layers may affect the knowledge transfer in the higher layers.",
"Therefore, we propose to progressively train each layer in the knowledge transfer.",
"The progressive knowledge transfer is divided into L stages, where L is the number of layers.",
"Diagram of three strategies Figure 2 illustrates the diagram of the three strategies.",
"For joint knowledge transfer and progressive knowledge transfer, there is no knowledge transfer for the beginning embedding layer and the final classifier in the layer-wise knowledge transfer stage.",
"They are copied from the IB-BERT teacher to the MobileBERT student.",
"Moreover, for progressive knowledge transfer, when we train the (cid:96) th layer, we freeze all the trainable parameters in the layers below.",
"In practice, we can soften the training process as follows.",
"When training a layer, we further tune the lower layers with a small learning rate rather than entirely freezing them.",
"In this section, we first present our architecture search experiments which lead to the model settings in Table 1, and then present the empirical",
"We conduct extensive experiments to search good model settings for the IB-BERT teacher and the MobileBERT student.",
"We start with SQuAD v1.1 dev F1 score as the performance metric in the search of model settings.",
"In this section, we only train each model for 125k steps with 2048 batch size, which halves the training schedule of original BERT (Devlin et al., 2018; You et al., 2019).",
"Architecture Search for IB-BERT Our design philosophy for the teacher model is to use as small inter-block hidden size (feature map size) as possible, as long as there is no accuracy loss.",
"Under this guideline, we design experiments to manipulate the inter-block size of a BERTLARGE -sized IB-BERT, and the results are shown in Table 2 with labels",
"(a)-(e).",
"We can see that reducing the interblock hidden size doesn't damage the performance h intra #Head (#Params) #FFN (#Params) SQuAD 192 6 (8M) 1 (7M) 82.6 160 5 (6.5M) 2 (10M) 83.4 128 4 (5M) 4 (12.5M) 83.4 96 3 (4M) 8 (14M) 81.6 Table 3: Experimental results on SQuAD v1.1 dev F1 score in search of good model settings for the MobileBERT student.",
"of BERT until it is smaller than 512.",
"Hence, we choose IB-BERTLARGE with its inter-block hidden size being 512 as the teacher model.",
"One may wonder whether we can also shrink the intra-block hidden size of the teacher.",
"We conduct experiments and the results are shown in Table 2 with labels",
"(f)-(i).",
"We can see that when the intra-block hidden size is reduced, the model performance is dramatically worse.",
"This means that the intra-block hidden size, which represents the representation power of non-linear modules, plays a crucial role in BERT.",
"Therefore, unlike the interblock hidden size, we do not shrink the intra-block hidden size of our teacher model.",
"Architecture Search for MobileBERT We seek a compression ratio of 4 for BERTBASE , so we design a set of MobileBERT models all with approximately 25M parameters but different ratios of the parameter numbers in MHA and FFN to select a good MobileBERT student model.",
"Table 3 shows our experimental results.",
"They have different balances between MHA and FFN.",
"From the table, we can see that the model performance reaches the peak when the ratio of parameters in MHA and FFN is 0.4 0.6.",
"This may justify why the original Transformer chooses the parameter ratio of MHA and FFN to 0.5.",
"We choose the architecture with 128 intra-block hidden size and 4 stacked FFNs as the MobileBERT student model in consideration of model accuracy and training efficiency.",
"We also accordingly set the number of attention heads in the teacher model to 4 in preparation for the layer-wise knowledge transfer.",
"Table 1 demonstrates the model settings of our IB-BERTLARGE teacher and MobileBERT student.",
"One may wonder whether reducing the number of heads will harm the performance of the teacher model.",
"By comparing",
"(a) and",
"(f) in Table 2, we can see that reducing the number of heads from 16 to 4 does not affect the performance of IB-BERTLARGE .",
"Following BERT (Devlin et al., 2018), we use the BooksCorpus (Zhu et al., 2015) and English Wikipedia as our pre-training data.",
"To make the IB-BERTLARGE teacher reach the same accuracy as original BERTLARGE , we train IB-BERTLARGE on 256 TPU v3 chips for 500k steps with a batch size of 4096 and LAMB optimizer (You et al., 2019).",
"For a fair comparison with the original BERT, we do not use training tricks in other BERT variants (Liu et al., 2019b; Joshi et al., 2019).",
"For MobileBERT, we use the same training schedule in the pre-training distillation stage.",
"Additionally, we use progressive knowledge transfer to train MobileBERT, which takes additional 240k steps over 24 layers.",
"In ablation studies, we halve the pretraining distillation schedule of MobileBERT to accelerate experiments.",
"Moreover, in the ablation study of knowledge transfer strategies, for a fair comparison, joint knowledge transfer and auxiliary knowledge transfer also take additional 240k steps.",
"For the downstream tasks, all reported results are obtained by simply fine-tuning MobileBERT just like what the original BERT does .",
"To fine-tune the pre-trained models, we search the optimization hyperparameters in a search space including different batch sizes (16/32/48), learning rates ((1-10) * e-5), and the number of epochs (2-10).",
"The search space is different from the original BERT because we find that MobileBERT usually needs a larger learning rate and more training epochs in fine-tuning.",
"We select the model for testing according to their performance on the development (dev) set.",
"The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of 9 natural language understanding tasks.",
"We compare MobileBERT with BERTBASE and a few state-of-the-art pre-BERT models on the GLUE leaderboard 3 : OpenAI GPT (Radford et al., 2018) and ELMo (Peters et al., 2018).",
"We also compare with three recently proposed compressed BERT models: BERT-PKD (Sun et al., 2019), and DistilBERT (Sanh et al., 2019).",
"To further show the advantage of MobileBERT over recent small BERT models, we also evaluate a smaller variant of our 3 https://gluebenchmark.com/leaderboard #Params #FLOPS Latency CoLA SST-2 MRPC STS-B QQP MNLI-m/mm QNLI RTE GLUE 8.5k 67k 3.7k 5.7k 364k 393k 108k 2.5k ELMo-BiLSTM-Attn --33.6 90.4 84.4 72.3 63.1 74.1/74.5 79.8 58.9 70.0 OpenAI GPT 109M -47.2 93.1 87.7 84.8 70.1 80.7/80.6 87.2 69.1 76.9 BERTBASE 109M 22.5B 342 ms 52.1 93.5 88.9 85.8 71.2 84.6 / 83.4 90.5 66.4 78.3 BERTBASE -6L-PKD* 66.5M 11.3B -92.0 85.0 -70.7 81.5/81.0 89.0 65.5 BERTBASE -4L-PKD * 52.2M 7.6B -24.8 89.4 82.6 79.8 70.2 79.9/79.3 85.1 62.3 BERTBASE -3L-PKD* 45.3M 5.7B -87.5 80.7 -68.1 76.7/76.3 84.7 58.2 DistilBERT BASE -6L 62.2M 11.3B -92.0 85.0 70.7 81.5/81.0 89.0 65.5 DistilBERT BASE -4L 52.2M 7.6B -32.8 91.4 82.4 76.1 68.5 78.9/78.0 85.2 54.1 TinyBERT* 14.5M 1.2B -43.3 92.6 86.4 79.9 71.3 82.5/81.8 87.7 62.9 75.4 MobileBERT TINY 15.1M 3.1B 40 ms 46.7 91.7 87.9 80.1 68.9 81.5/81.6 89.5 65.1 75.8 MobileBERT 25.3M 5.7B 62 ms 50.5 92.8 88.8 84.4 70.2 83.3/82.6 90.6 66.2 77.7 MobileBERT w/o OPT 25.3M 5.7B 192 ms 51.1 92.6 88.8 84.8 70.5 84.3/ 83.4 91.6 70.4 78.5 Table 4: The test results on the GLUE benchmark (except WNLI).",
"model with approximately 15M parameters called MobileBERT TINY4 , which reduces the number of FFNs in each layer and uses a lighter MHA structure.",
"Besides, to verify the performance of MobileBERT on real-world mobile devices, we export the models with TensorFlow Lite 5 APIs and measure the inference latencies on a 4-thread Pixel 4 phone with a fixed sequence length of 128.",
"The results are listed in Table 4.",
"6 From the table, we can see that MobileBERT is very competitive on the GLUE benchmark.",
"MobileBERT achieves an overall GLUE score of 77.7, which is only 0.6 lower than BERTBASE , while be-4 The detailed model setting of MobileBERT TINY can be found in Table 1 and in the appendix.",
"ing 4.3 smaller and 5.5 faster than BERTBASE .",
"Moreover, It outperforms the strong OpenAI GPT baseline by 0.8 GLUE score with 4 .",
"3 smaller model size.",
"It also outperforms all the other compressed BERT models with smaller or similar model sizes.",
"Finally, we find that the introduced operational optimizations hurt the model performance a bit.",
"Without these optimizations, MobileBERT can even outperforms BERTBASE by 0.2 GLUE score.",
"SQuAD is a large-scale reading comprehension datasets.",
"SQuAD1.1 (Rajpurkar et al., 2016) only contains questions that always have an answer in the given context, while SQuAD2.0 (Rajpurkar et al., 2018) contains unanswerable questions.",
"We evaluate MobileBERT only on the SQuAD dev datasets, as there is nearly no single model submission on SQuAD test leaderboard.",
"We compare our MobileBERT with BERTBASE , DistilBERT, and a strong baseline DocQA (Clark and Gardner, 2017).",
"We apply the standard post-training quantization in TensorFlow Lite to MobileBERT.",
"The results are shown in Table 6.",
"We find that while quantization can further compress MobileBERT by 4 , there is nearly no performance degradation from it.",
"This indicates that there is still a big room in the compression of MobileBERT.",
"We evaluate the effectiveness of the two operational optimizations introduced in Section 3.3, i.e., replacing layer normalization ( LayerNorm ) with NoNorm and replacing gelu activation with relu activation.",
"We report the inference latencies using the same experimental setting as in Section 4.6.1.",
"From Table 7, we can see that both NoNorm and relu are very effective in reducing the latency of MobileBERT, while the two operational optimizations do not reduce FLOPS.",
"This reveals the gap between the real-world inference latency and the theoretical computation overhead (i.e., FLOPS).",
"We also study how the choice of training strategy, i.e., auxiliary knowledge transfer, joint knowledge transfer, and progressive knowledge transfer, can affect the performance of MobileBERT.",
"As shown MNLI-m QNLI MRPC SST-2 BERTLARGE 86.6 92.1 87.8 93.7 IB-BERTLARGE 87.0 93.2 87.3 94.1 BERTBASE 84.4 91.1 86.7 92.9 MobileBERT (bare) 80.8 88.2 84.3 90.1 + PD 81.1 88.9 85.5 91.7 + PD + FMT 83.8 91.1 87.0 92.2 + PD + FMT + AT 84.4 91.5 87.0 92.5 Table 9: Ablation on the dev sets of GLUE benchmark.",
"in Table 8, progressive knowledge transfer consistently outperforms the other two strategies.",
"We notice that there is a significant performance gap between auxiliary knowledge transfer and the other two strategies.",
"We think the reason is that the intermediate layer-wise knowledge (i.e., attention maps and feature maps) from the teacher may not be optimal for the student, so the student needs an additional pre-training distillation stage to fine-tune its parameters.",
"We finally conduct a set of ablation experiments with regard to Attention Transfer (AT), Feature Map Transfer (FMT) and Pre-training Distillation (PD).",
"The operational OPTimizations (OPT) are removed in these experiments to make a fair comparison between MobileBERT and the original BERT.",
"The results are listed in Table 9.",
"We can see that the proposed Feature Map Transfer contributes most to the performance improvement of MobileBERT, while Attention Transfer and Pre-training Distillation also play positive roles.",
"We can also find that our IB-BERTLARGE teacher is as powerful as the original IB-BERTLARGE while MobileBERT degrades greatly when compared to its teacher.",
"So we believe that there is still a big room in the improvement of MobileBERT.",
"We have presented MobileBERT which is a task-agnostic compact variant of BERT.",
"Empirical results on popular NLP benchmarks show that MobileBERT is comparable with BERTBASE while being much smaller and faster.",
"MobileBERT can enable various NLP applications 7 to be easily deployed on mobile devices.",
"In this paper, we show that 1) it is crucial to keep MobileBERT deep and thin, 2) bottleneck/inverted-bottleneck structures enable effective layer-wise knowledge transfer, and 3) progressive knowledge transfer can efficiently train MobileBERT.",
"We believe our findings are generic and can be applied to other model compression problems."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method"
] |
[
"As large Pre-trained Language Models (PLMs) trained on large amounts of data in an unsupervised manner become more ubiquitous, identifying various types of bias in the text has come into sharp focus.",
"Existing Stereotype Detection ' datasets mainly adopt a diagnostic approach toward large PLMs.",
"Blodgett et al. (2021a) show that there are significant reliability issues with the existing benchmark datasets.",
"Annotating a reliable dataset requires a precise understanding of the subtle nuances of how stereotypes manifest in text.",
"In this paper, we annotate a focused evaluation set for Stereotype Detection ' that addresses those pitfalls by de-constructing various ways in which stereotypes manifest in text.",
"Further, we present a multi-task model that leverages the abundance of data-rich neighboring tasks such as hate speech detection, offensive language detection, misogyny detection, etc., to improve the empirical performance on Stereotype Detection '.",
"We then propose a reinforcement-learning agent that guides the multi-task learning model by learning to identify the training examples from the neighboring tasks that help the target task the most.",
"We show that the proposed models achieve significant empirical gains over existing baselines on all the tasks.",
"Following Peters et al. (2018) and then Vaswani et al. (2017), several empirically effective large pre-trained language models (PLMs) (Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Lewis et al., 2020; Brown et al., 2020) have been proposed.",
"Their empirical success on several existing tasks has resulted in them being ubiquitously used in many NLP applications that interact with humans on a daily basis (Olteanu et al., 2020).",
"As they are trained in an unsupervised manner on large amounts of arbitrary web data, harmful language This work is a part of summer internship at Microsoft Research, Redmond and bias creeps into their output.",
"This, in turn, is reflected in the applications that are built on top of them and further propagated in society through the use of those applications (Berk, 2017).",
"This issue motivates the need for a two-pronged solution: 1) to diagnose and de-noise the bias in the PLMs and 2) to identify and regulate harmful text externally at the output level.",
"Progress on the first front is necessary to de-bias the PLMs and prevent subtle biases that creep in through their distributed representations which can often only be recognized in retrospect.",
"This results in deep distrust of such systems among general population.",
"The second front is paramount to detect and potentially block harmful output and also ensure that future iterations and novel proposals of the language models don't take spurious paths to gaming the de-biasing techniques.",
"In this work, we focus on identifying harmful text with stereotypical associations externally.",
"S 1 : She may or may not be a jew but she's certainly cheap!",
"( insult, stereotype )",
"S 2 : Burn in hell, you Asian bastard!",
"( abuse, stereotype )",
"S 3 : Asians are good at math.",
"( stereotype )",
"S 4 : My African-American friend owns a watermelon patch.",
"( stereotype )",
"There exist several types of harmful language such as hate-speech, misogyny, stereotypes, abuse, threats, insult etc,.",
"Each type of offensive language has subtle linguistic nuances that are specific to the type of offensive language.",
"Often, offensive text contains multiple types of offense.",
"From the examples above, consider S 1 and S 2 .",
"Both, consist of multiple modes of offense.",
"While S 3 is purely a stereotype, it is still undesirable to be perpetuated.",
"Cardwell (1996) defines stereotype as a fixed, over-generalized belief about a particular group or class of people .",
"Stereotypes differ from other types of offensive text in two key aspects: ( 1 ) they require knowledge of their existence in the society to be identified, and ( 2 ) they might also often 6703 express positive sentiment about the target group.",
"Although some stereotypes ostensibly express positive sentiment towards the target group, they are still undesirable as they propagate false biases in the society and are offensive to the target group.",
"Consider sentences S 3 and S 4 from above examples.",
"While S 3 expresses positive sentiment, it is still false and undesirable.",
"S 4 requires knowledge of that particular stereotype's history to understand its offensive nature.",
"Requiring prior knowledge makes annotating data for the task of Stereotype Detection ' harder, as annotators are unlikely to be aware of all the stereotypes that exist in the society.",
"(Czopp, 2008).",
"Two recent works have proposed pioneering diagnostic datasets for measuring stereotypical bias of large PLMs (Nadeem et al., 2020; Nangia et al., 2020).",
"But, Blodgett et al. (2021b) has demonstrated that these datasets suffer from two major types of issues: ( 1 ) conceptual: include harmless stereotypes, artificial anti-stereotypes, confusing nationality with ethnicity etc, and ( 2 ) operational: invalid perturbations, unnatural text, incommensurable target groups etc,.",
"In addition, diagnostic datasets also suffer from lack of sufficient coverage of subtle nuances of manifestations of stereotypes in text.",
"This makes them less suitable for training an effective discriminative classifier.",
"Hence, we undertake a focused annotation effort to create a fine-grained evaluation dataset.",
"We mainly aim to alleviate the conceptual issues of antivs. non-stereotypes, containing irrelevant stereotypes and operational issues of unnatural text, invalid perturbations.",
"We achieve this by a mix of ( 1 ) selecting more appropriate data candidates and ( 2 ) devising a focused questionnaire for the annotation task that breaks down different dimensions of the linguistic challenge of Stereotype Identification '.",
"Collecting real-world data from the social forum Reddit for annotation also results in better coverage of subtle manifestations of stereotypes in text.",
"Although stereotypes differ from other types of offensive language in multiple ways, they also overlap to a significant extent.",
"Often, various types of offensive text such as abuse, misogyny and hate speech integrally consists stereotypical associations.",
"Abundance of high-quality annotated datasets are available for these neighboring tasks.",
"We leverage this unique nature of Stereotype Detection task to propose a multi-task learning framework for all related tasks.",
"As the overlap between the tasks is only partial, we then propose a reinforcement learning agent that learns to guide the multi-task learning model by selecting meaningful data examples from the neighboring task datasets that help in improving the target task.",
"We show that these two modifications improve the empirical performance on all the tasks significantly.",
"Then, we look more closely at the reinforcement-learning agent's learning process via a suite of ablation studies that throw light on its intricate inner workings.",
"To summarize, our main contributions are:",
"1. We devise a focused annotation effort for Stereotype Detection to construct a fine-grained evaluation set for the task.",
"2. We leverage the unique existence of several correlated neighboring tasks to propose a reinforcement-learning guided multitask framework that learns to identify data examples that are beneficial for the target task.",
"3. We perform exhaustive empirical evaluation and ablation studies to demonstrate the effectiveness of the framework and showcase intricate details of its learning process.",
"1 2 Related Work With the rise of social media and hate speech forums online (Phadke and Mitra, 2020; Szendro, 2021) offensive language detection has become more important that ever before.",
"Several recent works focus on characterizing various types of offensive language detection (Fortuna and Nunes, 2018; Shushkevich and Cardiff, 2019; Mishra et al., 2019; Parekh and Patel, 2017).",
"But, works that focus solely on Stereotype Detection in English language are scarce.",
"This is partly because stereotypes tend to be subtler offenses in comparison to other types are offensive languages and hence receive less immediate focus, and in part due to the challenge of requiring the knowledge of the stereotype's existence in society to reliably annotate data for the task.",
"We approach this problem by breaking down various aspects of stereotypical text and crowd-sourcing annotations only for aspects that require linguistic understanding rather than world-knowledge.",
"2020) while others worked on knowledge-based and semi-supervised learning based models (Fraser et al., 2021; Badjatiya et al., 2019) for identifying stereotypical text.",
"Computational model based works either use datasets meant for other tasks such as hate speech detection etc, or focus mainly on the available diagnostic datasets modified for classification task.",
"But, diagnostic datasets suffer from lack of sufficient coverage of naturally occurring text due to their crowd-sourced construction procedure (Blodgett et al., 2021b).",
"We address these issues in our work by collecting natural text data from social forum Reddit, by mining specific subreddits that contain mainly subtle stereotypical text.",
"Multi-task learning (Caruana, 1997), can be broadly classified into two paradigms (Ruder, 2017): hard parameter sharing (Caruana, 1997) and soft parameter sharing (Yang and Hospedales, 2016; Duong et al., 2015).",
"We implement hard-parameter sharing based multi-task model for our experiments.",
"Given the low-resource setting on Stereotype Detection task, semi-supervised data annotation is one plausible solution for the problem.",
"Several recent works have also been focusing on reinforcement-learning guided semi-supervision (Ye et al., 2020; Konyushkova et al., 2020; Laskin et al., 2020).",
"Ye et al. (2020), in particular, work with a single-task and unsupervised data to generate automated-annotations for new examples.",
"In contrast, we use the data from neighboring tasks with different labels for multi-task learning and apply an RL agent to select examples for training the neighboring task in such a way that benefits the target task the most.",
"As Blodgett et al. (2021a) demonstrates, existing diagnostic datasets such as Stereoset (Nadeem et al., 2020) and CrowS-Pairs (Nangia et al., 2020) suffer from several conceptual and operational issues.",
"Additionally diagnostic datasets, by nature, also suffer from lack of coverage of subtle manifestations of stereotypes in text.",
"But, this is mandatory in the dataset used to train an effective discriminatory classifier that can generalize well to identifying stereotypical associations in open domain use cases.",
"This is often the case in the examples encountered in downstream applications that interact with humans on a daily basis.",
"Nadeem et al. (2020) and Nangia et al. (2020) both follow the following dataset construction procedure: they provide a prompt word or phrase, which is the target group of the stereotype, and ask the crowd workers to construct a stereotypical sentence which includes the provided prompt.",
"Then, they either flip the identity of the target group (Nan-gia et al., 2020), the associated characteristic/action of the group or both (Nadeem et al., 2020) to obtain negative ( anti-stereotypes ) examples.",
"Although this process results in good coverage of existing stereotypes within society due to the variety of prompts provided, it also results in data examples that either explicitly state the stereotype or are anti-stereotypes that are mostly contrived sentences and are unlikely to occur in regular discourse.",
"An example from Stereoset dataset (Nadeem et al., 2020) is shown below to illustrate this point.",
"1 is an explicitly states stereotype where as 2 and 3 are unlikely to occur in regular discourse.",
"1 .",
"He is an Arab.",
"Must be a terrorist.",
"( stereotype ) 2 .",
"He is an Arab.",
"Must be a pacifist.",
"( anti-stereotype ) 3 .",
"He is an Arab.",
"Dog needs walk.",
"( unrelated )",
"We address this issue by additionally annotating text from actual discussions from Reddit to create a more representative dataset for the task.",
"We use the collected examples for a crowd-sourced annotation task to build the dataset.",
"The annotation task is explained in further detail below.",
"We use the prompts from Nadeem et al. (2020) and Nangia et al. (2020) to collect candidates from selected subreddits which are likely to contain a good coverage of stereotypical and non-stereotypical text.",
"We also use the explicit stereotypes from the Stereoset and CrowS Pairs datasets as candidates.",
"Then, we design a focused annotation task for these candidates 6705 to identify various types of Stereotypes.",
"We make the key observation that there are three broad classes of text that contain potentially harmful over-simplified beliefs: 1 ) text that is mainly intended to express a stereotype ( explicit stereotype ), 2 ) text that is not mainly intended to convey a stereotype but nevertheless propagates a stereotypical association ( implicit stereotypical association ), and 3 ) text that expresses an over-simplified belief which is not that widely-known, hence is not considered stereotypical ( anti-stereotype ).",
"In addition, there is 4 ) text that doesn't contain any over-simplified beliefs about a group of people.",
"We call this type of text as non-stereotypes .",
"Examples of different types text are shown in table",
"1. Nadeem et al. (2020) and Nangia et al. (2020) are mainly made up of explicit-stereotypes and anti-stereotypes .",
"They lack coverage of implicity stereotypical associations and non-stereotypes due to their construction process and the nature of diagnostic datasets.",
"These are necessary to build a task-representative classification dataset.",
"Hence, in our annotation task we aim to add non-stereotypical examples that contain the same prompts as the ones that are used to create the stereotypical examples.",
"To this end, we collect candidate examples from two subreddit forums /r/Jokes and /r/AskHistorians .",
"We observe that /r/Jokes consists of a high percentage of text with stereotypical associations (both explicit and implicit stereotypes) and /r/AskHistorians tends to consist mainly factual text that is carefully constructed to avoid stereotypical associations.",
"We collect examples that contain the prompts provided in the Nadeem et al. (2020) dataset as candidates for annotation.",
"We also use the explicit stereotypes from Stereoset and CrowS-Pairs datasets for annotation.",
"We perform annotation using Amazon Mechanical Truk .",
"For each candidate sentence, we ask the annotators to answer the following questions:",
"1. Is there an over-simplified belief about a particular type of person intentionally expressed in the text?",
"2. Is there an unintentional, widely-known stereotypical association present in the text?",
"3. Does the sentence seem made up (unlikely to occur in regular discourse)?",
"Each example is annotated by three annotators and we use the majority answer as the gold label.",
"This annotation allows us to separate the text into one of the above 4 categories.",
"Our dataset consists of 742 explicit stereotypes, 282 implicit stereotypes and 1 , 197 non-stereotypes.",
"We show the summary statistics of the annotated dataset in table",
"2. 3.2 Ethics Statement We conducted a qualification test to select workers based on their performance.",
"The workers were paid a bonus of USD 0 .",
"10 for taking the qualification text.",
"We paid USD 0 .",
"25 for a batch of 10 examples, each batch taking 45 60 seconds on average.",
"This amounts to USD 15 20 /hour.",
"We displayed a warning on the task that said that the task might contain potentially offensive language.",
"We didn't collect any personal identifying information of the workers other than their worker ID for assigning qualifications.",
"We restricted the workers location to the USA with minimum of 5 , 000 approved HITs and 98% HIT approval rate.",
"As discussed in section 1, high-quality gold data for Stereotype Detection is scarce.",
"But, several tasks with correlating objectives have abundance of high-quality annotated datasets.",
"We observe that several tasks under the general umbrella of Offensive Language Detection such as Abuse Detection , Hate Speech Detection & Misogyny Detection often include text with stereotypical associations, as demonstrated in examples S 1 and S 2 in section",
"1. We call these tasks neighboring tasks .",
"We leverage the neighboring task datasets to improve the performance on the low-resource setting of Stereotype Detection .",
"First, we propose a multi-task learning model for all the tasks.",
"Then, we make the key observation that all examples from the neighboring tasks are not equally useful for the target task as the objectives only overlap partially.",
"Further, we propose a reinforcement-learning agent, inspired from Ye et al. (2020), that learns to select data examples from the neighboring task datasets which are most relevant to the target task's learning ob-6706 jective.",
"We guide the agent via reward assignment based on shared model's performance on the evaluation data of the target task.",
"We experiment both the settings with 4 popular large PLMs as base classifiers and demonstrate empirical gains using this framework.",
"In subsection 4.1, we describe the multi-task learning (MTL) model followed by the Reinforcement Learning guided multi-task learning model (RL-MTL) in subsection 4.2.",
"Then, in subsection 5.1, we describe the baseline classifiers we use for our experiments.",
"The motivation behind our Multi-Task Learning model is to leverage the transfer learning gains from the neighboring tasks to improve the target task.",
"As the tasks have partially overlapping objectives, solving the selected neighboring tasks effectively requires an understanding of largely similar linguistic characteristics as the target task.",
"Hence, leveraging the intermediate representations of the text from the neighboring task to boost the classifier is expected to benefit the target task.",
"Following this motivation, our proposed multitask model consists of a fixed PLM-based representation layer, followed by shared parameters that are common for all the tasks.",
"Then, we add separate classification heads for each task.",
"We implement hard parameter sharing (Caruana, 1997; Ruder, 2017) in our model.",
"The shared parameters compute intermediate representations for the text input.",
"These intermediate representations are shared by all the tasks.",
"Parameters for the shared representation layers are first optimized by training on the neighboring tasks.",
"Then, they are leveraged as a more beneficial parameter initialization for training on the target task data.",
"The input to the multi-task model is the text of the data example and a task ID.",
"Output of the model is predicted label on the specified task.",
"Each task in the model could either be a single-class classification task or a multi-label classification task.",
"Classification heads for single-class classification tasks have a softmax layer after the final layer.",
"Multi-label tasks have a sigmoid layer for each output neuron in the final layer of the classification heads.",
"First, we jointly train the model on each of the neighboring tasks in a sequential manner.",
"Then, we train the multi-task model on the target task and evaluate it on the test set of the target task.",
"The RL-guided multi-task model has an additional RL agent on top of the MTL model to select examples from the neighboring task datasets that would be used to train the shared classifier.",
"Key intuition behind the introduction of the RL agent is that, not all data examples from the neighbor task are equally useful in learning the target task .",
"Architecture of the RL-guided MTL model is shown in figure",
"1. Following the above observation, we employ the agent to identify examples that are useful for the target objective and drop examples that distract the classifier from the target task.",
"The agent is trained using an actor-critic reinforcement paradigm (Konda and Tsitsiklis, 2000).",
"For each example in the neighbor task, the Actor decides whether or not to use it for training the shared classifier.",
"Critic computes the expected reward based on Actor 's actions for a mini-batch.",
"Upon training using the selected examples, we then assign reward to the agent by evaluating the performance of the shared classifier on the target task.",
"If the F 1 scores on the valuation set for b mini-batches, each of size z , are { F 01 , F 11 , . . . , F b 1 } and expected rewards predicted by the critic are { e 0 , e 1 , . . . , e b }, then the policy loss is computed as follows: F i 1 = F i 1 F 1 F 1 + (1) 6707 p = 1 b bi =1 ( F i 1 e i ) 1 z zj =1 log ( P [ a ij ]) (2) v = 1 b bi =1 L 1 -loss (1 , F i 1 ) (3) total loss = policy loss (p) + value loss (v) (4) where is a smoothing constant, a ij is the action decided by the Actor for the j th example of mini-batch i , F 1 and F 1 are mean and standard deviations of the macroF 1 scores, respectively.",
"The algorithm for RL-guided Multitask learning is shown in algorithm",
"1. Input to the RL-MTL model is a set of neighboring task datasets and a target task dataset.",
"Output is trained classifier C .",
"We initialize the parameters of the RL-MTL base classifier with the trained parameters of the MTL model.",
"Later, we evaluate the impact of this initialization via an ablation study in section 7.1.",
"Algorithm 1 RL-Guided MTL Require : Neighbor Datasets { N 0 , N 1 , . . . , N d }, Target Dataset T Parameters : Policy Network P that includes Actor Network A and Critic Network R 1: Select baseline classifier C 2: for episode i = 1 , 2 , . . . , e do 3: for neighbor dataset j = 1 , 2 , . . . , d do 4: for mini-batch k = 1 , 2 , . . . , b do 5: Actor Network A makes binary SELECT / REJECT decision for each example in N jk 6: Critic Network R computes expected reward based on examples selected by Actor A = E [ r ] ijk 7: Train C on the SELECTED mini-batch subset N SELjk 8: Evaluate on Target Dataset T and obtain F 1 on target dataset evaluation set F ijk 1 9: end for 10: Use F ijk 1 s and E [ r ] ijk s to compute loss according to equation 4 11: Update parameters of A and R 12: end for 13: end for 14: return Trained classifier C 5 Experiments We perform experiments on six datasets in three phases.",
"In the first phase, we experiment with PLM-based fine-tuned classifiers for each task as baselines.",
"In the second phase, we experiment with all the tasks using the multi-task learning model described in section 4.1, with each PLM as a base classifier.",
"In the third phase, we train the reinforcement-learning guided multi-task learning framework (section 4.2) for all the tasks with each of the PLMs as base classifier.",
"We select four popular PLMs as base classifiers for our empirical experiments, namely, BERT-base, BERT-large (Devlin et al., 2019), BART-large (Lewis et al., 2020) and XLNet-large (Yang et al., 2019).",
"We use the implementations from Wolf et al. (2020)'s huggingface transformers library 2 for experimentation.",
"We fine-tune a classification layer on top of representations from each of the PLMs as baseline to evaluate our framework.",
"We use six datasets for our empirical evaluation, namely, Jigsaw Toxicity Dataset, Hate Speech Detection (de Gibert et al., 2018), Misogyny Detection (Fersini et al., 2018), Offensive Language Detection (Davidson et al., 2017), coarse-grained Stereotype Detection (combination of Stereoset , CrowS-Pairs and Reddit Data) and finally fine-grained Stereotype Detection Data (as described in section 3).",
"We describe each dataset briefly below.",
"Hate Speech Detection (de Gibert et al., 2018) dataset consists of 10 , 944 data examples of text extracted from Stromfront, a white-supremacist forum.",
"Each piece of text is labeled as either hate speech or not .",
"Misogyny Detection (Fersini et al., 2018) dataset consists of 3 , 251 data examples of text labeled with the binary label of being misogynous or not .",
"Offensive Language Detection (Davidson et al., 2017) dataset was built using crowd-sourced hate lexicon to collect tweets, followed by manual annotation of each example as one of hate-speech , only offensive language or neither .",
"This dataset contains 24 , 783 examples.",
"Coarse-Grained Stereotype Detection : We create this dataset by combining stereotypical examples from Stereoset and CrowS-Pairs datasets to get positive examples, followed by adding negative examples from the subreddit /r/AskHistorians .",
"We 2 https://github.com/huggingface/ transformers 6708 do not use crowd sourced labels in this dataset.",
"We use the labels from the original datasets.",
"The dataset consists of 23 , 900 data examples.",
"Fine-Grained Stereotype Detection : This dataset is the result of our annotation efforts in section",
"3. It consists of 2 , 221 examples, each annotated with one of three possible labels: explicit stereotype, implicit stereotype and non-stereotype .",
"Jigsaw Toxicity Dataset 3 consists of 159 , 571 training examples and 153 , 164 test examples labeled with one or more of the seven labels: toxic, severely toxic, obscene, threat, insult, identity hate, none .",
"We use this data only for training.",
"We don't evaluate performance on this dataset.",
"We present the results of the empirical evaluation tasks in table",
"3. In Hate Speech Detection task, we observe that RL-MTL learning results in significant improvements over all the baseline classifiers.",
"Plain MTL model also improves upon the baseline classifiers except in the case on BART-large.",
"The best model for this task is BERT-base + RL-MTL which achieves a macro-F1 score of 72 .",
"06 compared to 68 .",
"91 obtained by the best baseline classifier.",
"Best MTL model obtains 69 .",
"78 F1.",
"For Hate Speech and Offensive Language Detection task, the respective numbers for baseline, MTL and RL-MTL models are 66 .",
"13 , 68 .",
"57 and 68 .",
"97 .",
"The models achieve 74 .",
"16 , 74 .",
"40 and 75 .",
"21 on Misogyny Detection task, respectively.",
"In Coarse-Grained Stereotype Detection task, they achieve 65 .",
"71 , 68 .",
"29 & 74 .",
"18 , which is a significant gradation over each previous class of models.",
"On our focus evaluation set of Fine-Grained Stereotype Detection , we achieve 61 .",
"36 , 65 .",
"00 & 67 .",
"94 in each class of models.",
"The results on this dataset are obtained in a zero-shot setting as we only use this dataset for evaluation.",
"In the first ablation study described in subsection 7.1, we study the importance of initializing RL-MTL model with the trained parameters of MTL model.",
"Following that, we look into more detail about the usefulness of neighbor tasks on the target task via an ablation study.We describe these experiments in further detail in subsection 7.2.",
"In our original experiments, we initialize the parameters of RL-MTL model with trained parameters from the MTL model.",
"This allows the RL agent to begin from a well-optimized point in the parameter sample space.",
"In this ablation study, we initialize the RL-MTL model from scratch to see how it impacts the performance of the RL-MTL model.",
"We perform this experiment with BERT-base as base classifier.",
"The performance of the RLMTL model without initialization drops to 70 .",
"23 on HS task, 67 .",
"23 on HSO task, 71 .",
"10 on MG task, 60 .",
"42 on CG-ST task and 57 .",
"32 on FG-ST task.",
"The respective numbers for the MTL initialized model are 72 .",
"06 , 68 .",
"97 , 74 .",
"78 , 74 .",
"18 and 65 .",
"72 .",
"Initialization has biggest impact on the Coarseand Fine-Grained Stereotype Detection tasks.",
"Overall, initialization with MTL trained parameters results in a better convergence point for the RL-MTL model.",
"In this task, we aim to study the neighbor tasks that are most useful for each target task.",
"For each dataset, we train RL-MTL framework with only one other neighbor dataset.",
"We see which task yields biggest improvement for each target task.",
"We experiment with various combinations of datasets for this dataset.",
"Results for this ablation study are shown in table",
"4. All experiments in this ablation study are performed using BERT-base as the base classifier.",
"Results in table 4 show that for both Hate Speech Detection (HS) and Hate Speech and Offensive Language Detection (HSO) tasks, Coarse-Grained Stereotype Detection (C-ST) neighboring task yields the best improvements to 71 .",
"1 and 67 .",
"39 macro-F1, respectively.",
"All the other three neighboring tasks are useful in improving the performance of the base classifier from 66 .",
"47 and 66 .",
"13 F1 scored.",
"For Misogyny Detection (MG) task, HSO neighboring task results in an improvement from 74 .",
"16 to 75 .",
"87 , while the other two tasks deteriorate the performance on the task.",
"It is also interesting to note that, the combined performance on the task with all three datasets is lower ( 74 . 78 ) than when using HSO data alone.",
"For both Coarseand Fine-grained Stereotype Detection (F-ST) tasks, HS and HSO datasets improve the performance over the baseline, while MG deteriorates the performance.",
"The combined improvement of all the neighboring tasks together is higher than either HS 6709 Model Hate Speech Detection Offense Detection Misogyny Detection Coarse Stereotypes Fine Stereotypes BERT-base 66 .",
"It is also interesting to note that the C-ST task doesn't contribute significantly to performance improvement on F-ST task.",
"This might be due to the presence of anti-stereotypes and several other issues pointed out in Blodgett et al. (2021b).",
"We tackle the problem of Stereotype Detection from data annotation and low-resource computational framework perspectives in this paper.",
"First, we discuss the key challenges that make the task unique and a low-resource one.",
"Then, we devise a focused annotation task in conjunction with selected data candidate collection to create a fine-grained evaluation set for the task.",
"Further, we utilize several neighboring tasks that are correlated with our target task of 'Stereotype Detection' , with an abundance of high-quality gold data.",
"We propose a reinforcement learning-guided multitask learning framework that learns to select relevant examples from the neighboring tasks that improve performance on the target task.",
"Finally, we perform exhaustive empirical experiments to showcase the effectiveness of the framework and delve into various details of the learning process via several ablation studies.",
"We thank the anonymous reviewers and meta-reviewer for their insightful comments that helped in improving our paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"method",
"method",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"objective",
"method",
"other"
] |
[
"Natural language is compositional; the meaning of a sentence is a function of the meaning of its parts.",
"This property allows humans to create and interpret novel sentences, generalizing robustly outside their prior experience.",
"Neural networks have been shown to struggle with this kind of generalization, in particular performing poorly on tasks designed to assess compositional generalization (i.e. where training and testing distributions differ in ways that would be trivial for a compositional strategy to resolve).",
"Their poor performance on these tasks may in part be due to the nature of supervised learning which assumes training and testing data to be drawn from the same distribution.",
"We implement a meta-learning augmented version of supervised learning whose objective directly optimizes for out-of-distribution generalization.",
"We construct pairs of tasks for meta-learning by sub-sampling existing training data.",
"Each pair of tasks is constructed to contain relevant examples, as determined by a similarity metric, in an effort to inhibit models from memorizing their input.",
"Experimental results on the COGS and SCAN datasets show that our similarity-driven meta-learning can improve generalization performance.",
"Compositionality is the property of human language that allows for the meaning of a sentence to be constructed from the meaning of its parts and the way in which they are combined (Cann, 1993).",
"By decomposing phrases into known parts we can generalize to novel sentences despite never having encountered them before.",
"In practice this allows us to produce and interpret a functionally limitless number of sentences given finite means (Chomsky, 1965).",
"Whether or not neural networks can generalize in this way remains unanswered.",
"Prior work asserts that there exist fundamental differences between cognitive and connectionist architectures that makes compositional generalization by the latter unlikely (Fodor and Pylyshyn, 1988).",
"However, recent work has shown these models' capacity for learning some syntactic properties.",
"Hupkes et al. (2018) show how some architectures can handle hierarchy in an algebraic context and generalize in a limited way to unseen depths and lengths.",
"Work looking at the latent representations learned by deep machine translation systems show how these models seem to extract constituency and syntactic class information from data (Blevins et al., 2018; Belinkov et al., 2018).",
"These results, and the more general fact that neural models perform a variety of NLP tasks with high fidelity (eg. Vaswani et al., 2017; Dong and Lapata, 2016), suggest these models have some sensitivity to syntactic structure and by extension may be able to learn to generalize compositionally.",
"Recently there have been a number of datasets designed to more formally assess connectionist models' aptitude for compositional generalization (Kim and Linzen, 2020; Lake and Baroni, 2018; Hupkes et al., 2019).",
"These datasets frame the problem of compositional generalization as one of out-of-distribution generalization: the model is trained on one distribution and tested on another which differs in ways that would be trivial for a compositional strategy to resolve.",
"A variety of neural network architectures have shown mixed performance across these tasks, failing to show conclusively that connectionist models are reliably capable of generalizing compositionally (Keysers et al., 2020; Lake and Baroni, 2018).",
"Natural language requires a mixture of memorization and generalization (Jiang et al., 2020), memorizing exceptions and atomic concepts with which to generalize.",
"Previous work looking at compositional generalization has suggested that models may memorize large spans of sentences multiple words in length (Hupkes et al., 2019; Keysers et al., 2020).",
"This practice may not harm in-domain performance, but if at test time the model encounters a sequence of words it has not encountered before it will be unable to interpret it having not learned the atoms (words) that comprise it.",
"Griffiths (2020) looks at the role of limitations in the development of human cognitive mechanisms.",
"Humans' finite computational ability and limited memory may be central to the emergence of robust generalization strategies like compositionality.",
"A hard upper-bound on the amount we can memorize may be in part what forces us to generalize as we do.",
"Without the same restriction models may prefer a strategy that memorizes large sections of the input potentially inhibiting their ability to compositionally generalize.",
"In a way the difficulty of these models to generalize out of distribution is unsurprising: supervised learning assumes that training and testing data are drawn from the same distribution, and therefore does not necessarily favour strategies that are robust out of distribution.",
"Data necessarily under-specifies for the generalizations that produced it.",
"Accordingly for a given dataset there may be a large number of generalization strategies that are compatible with the data, only some of which will perform well outside of training (D'Amour et al., 2020).",
"It seems connectionist models do not reliably extract the strategies from their training data that generalize well outside of the training distribution.",
"Here we focus on an approach that tries to to introduce a bias during training such that the model arrives at a more robust strategy.",
"To do this we implement a variant of the model agnostic meta-learning algorithm (MAML, Finn et al., 2017a).",
"The approach used here follows Wang et al. (2020a) which implements an objective function that explicitly optimizes for out-of-distribution generalization in line with Li et al. (2018).",
"Wang et al. (2020a) creates pairs of tasks for each batch (which here we call meta-train and meta-test) by sub-sampling the existing training data.",
"Each meta-train, meta-test task pair is designed to simulate the divergence between training and testing: meta-train is designed to resemble the training distribution, and meta-test to resemble the test distribution.",
"The training objective then requires that update steps taken on meta-train are also beneficial for meta-test.",
"This serves as a kind of regularizer, inhibiting the model from taking update steps that only benefit meta-train.",
"By manipulating the composition of meta-test we can control the nature of the regularization applied.",
"Unlike other meta-learning methods this is not used for few or zero-shot performance.",
"Instead it acts as a kind of meta-augmented supervised learning, that helps the model to generalize robustly outside of its training distribution.",
"The approach taken by Wang et al. (2020a) relies on the knowledge of the test setting.",
"While it does not assume access to the test distribution, it assumes access to the family of test distributions, from which the actual test distribution will be drawn.",
"While substantially less restrictive than the standard iid setting, it still poses a problem if we do not know the test distribution, or if the model is evaluated in a way that does not lend itself to being represented by discrete pairs of tasks (i.e. if test and train differ in a variety of distinct ways).",
"Here we propose a more general approach that aims to generate meta-train, meta-test pairs which are populated with similar (rather than divergent) examples in an effort to inhibit the model from memorizing its input.",
"Similarity is determined by a string or tree kernel so that for each meta-train task a corresponding meta-test task is created from examples deemed similar.",
"By selecting for similar examples we design the meta-test task to include examples with many of the same words as meta-train, but in novel combinations.",
"As our training objective encourages gradient steps that are beneficial for both tasks we expect the model to be less likely to memorize large chunks which are unlikely to occur in both tasks, and therefore generalize more compositionally.",
"This generalizes the approach from Wang et al. (2020a), by using the meta-test task to apply a bias not-strictly related to the test distribution: the design of the meta-test task allows us to design the bias which it applies.",
"It is worth noting that other recent approaches to this problem have leveraged data augmentation to make the training distribution more representative of the test distribution (Andreas, 2020).",
"We believe this line of work is orthogonal to ours as it does not focus on getting a model to generalize compositionally, but rather making the task simple enough that compositional generalization is not needed.",
"Our method is model agnostic, and does not require prior knowledge of the target distribution.",
"We summarise our contributions as follows: We approach the problem of compositional generalization with a meta-learning objective that tries to explicitly reduce input memorization using similarity-driven virtual tasks.",
"We perform experiments on two text-to-semantic compositional datasets: COGS and SCAN.",
"Our new training objectives lead to significant improvements in accuracy over a baseline parser trained with conventional supervised learning.",
"1 2 Methods We introduce the meta-learning augmented approach to supervised learning from Li et al. (2018); Wang et al. (2020a) that explicitly optimizes for out-of-distribution generalization.",
"Central to this approach is the generation of tasks for meta-learning by sub-sampling training data.",
"We introduce three kinds of similarity metrics used to guide the construction of these tasks.",
"Compositional Generalization Lake and Baroni (eg. 2018); Kim and Linzen (eg. 2020) introduce datasets designed to assess compositional generalization.",
"These datasets are created by generating synthetic data with different distributions for testing and training.",
"The differences between the distributions are trivially resolved by a compositional strategy.",
"At their core these tasks tend to assess three key components of compositional ability: systematicity, productivity, and primitive application.",
"Systematicity allows for the use of known parts in novel combinations as in",
"(a).",
"Productivity enables generalization to longer sequences than those seen in training as in",
"(b).",
"Primitive application allows for a word only seen in isolation during training to be applied compositionally at test time as in",
"(c).",
"(a) The cat gives the dog a gift The dog gives the cat a gift",
"(b) The cat gives the dog a gift The cat gives the dog a gift and the bird a gift",
"A compositional grammar like the one that generated the data would be able to resolve these three kinds of generalization easily, and therefore performance on these tasks is taken as an indication of a model's compositional ability.",
"Conventional Supervised Learning The compositional generalization datasets we look at are semantic parsing tasks, mapping between natural language and a formal representation.",
"A usual supervised learning objective for semantic parsing is to minimize the negative log-likelihood of the correct formal representation given a natural language input sentence, i.e. minimising LB ( ) = 1 NN (cid:88) i =1 log p ( y | x ) (1) where N is the size of batch B , y is a formal representation and x is a natural language sentence.",
"This approach assumes that the training and testing data are independent and identically distributed.",
"Task Distributions Following from Wang et al. (2020a), we utilize a learning algorithm that can enable a parser to benefit from a distribution of virtual tasks, denoted by p ( ) , where refers to an instance of a virtual compositional generalization task that has its own training and test examples.",
"Once we have constructed our pairs of virtual tasks we need a training algorithm that encourages",
"compositional generalization in each.",
"Like Wang et al. (2020a), we turn to optimization-based meta-learning algorithms (Finn et al., 2017b; Li et al., 2018) and apply DG-MAML (Domain Generalization with Model-Agnostic Meta-Learning), a variant of MAML (Finn et al., 2017b).",
"Intuitively, DG-MAML encourages optimization on meta-training examples to have a positive effect on the meta-test examples as well.",
"During each learning episode of MAML training we randomly sample a task which consists of a training batch B t and a generalization batch B g and conduct optimization in two steps, namely meta-train and meta-test .",
"Meta-Test The fine-tuned parameters (cid:48) are evaluated on the accompanying generalization task, meta-test, by computing their loss on it denoted as LB g ( (cid:48) ) .",
"The final objective for a task is then to jointly optimize the following: L ( ) = LB t ( ) + LB g ( (cid:48) ) = LB t ( ) + LB g ( L ( )) (3) The objective now becomes to reduce the joint loss of both the meta-train and meta-test tasks.",
"Optimizing in this way ensures that updates on meta-train are also beneficial to meta-test.",
"The loss on meta-test acts as a constraint on the loss from meta-train.",
"This is unlike traditional supervised learning ( L ( ) = LB t ( ) + LB g ( ) ) where the loss on one batch does not constrain the loss on another.",
"With a random B t and B g , the joint loss function can be seen as a kind of generic regularizer, ensuring that update steps are not overly beneficial to meta-train alone.",
"By constructing B t and B g in ways which we expect to be relevant to compositionality, we aim to allow the MAML algorithm to apply specialized regularization during training.",
"Here we design meta-test to be similar to the meta-train task because we believe this highlights the systematicity generalization that is key to compositional ability: selecting for examples comprised of the same atoms but in different arrangements.",
"In constraining each update step with respect to meta-train by performance on similar examples Source Example : The girl changed a sandwich beside the table .",
"in meta-test we expect the model to dis-prefer a strategy that does not also work for meta-test like memorization of whole phrases or large sections of the input.",
"Ideally, the design of virtual tasks should reflect specific generalization cases for each dataset.",
"However, in practice this requires some prior knowledge of the distribution to which the model will be expected to generalize, which is not always available.",
"Instead we aim to naively structure the virtual tasks to resemble each other.",
"To do this we use a number of similarity measures intended to help select examples which highlight the systematicity of natural language.",
"Inspired by kernel density estimation (Parzen, 1962), we define a relevance distribution for each example: p ( x (cid:48) , y (cid:48) | x, y ) exp (cid:0) k ([ x, y ] , [ x (cid:48) , y (cid:48) ] / (cid:1) (4) where k is the similarity function, [ x, y ] is a training example, is a temperature that controls the sharpness of the distribution.",
"Based on our extended interpretation of relevance, a high p implies that [ x, y ] is systematically relevant to [ x (cid:48) , y (cid:48) ] containing many of the same atoms but in a novel combination.",
"We look at three similarity metrics to guide subsampling existing training data into meta-test tasks proportional to each example's p .",
"Levenshtein Distance First, we consider Levenshtein distance, a kind of edit distance widely used to measure the dissimilarity between strings.",
"We compute the negative Levenshtein distance at the word-level between natural language sentences of two examples: k ([ x, y ] , [ x (cid:48) , y (cid:48) ]) = 1 LevDistance ( x, x (cid:48) ) (5) where LevDistance returns the number of edit operations required to transform x into x (cid:48) .",
"See Table 1 for examples.",
"Another family of similarity metrics for discrete structures are convolution kernels (Haussler, 1999).",
"Tree-Kernel Similarity In semantic parsing, the formal representation y usually has a known grammar which can be used to represent it as a tree structure.",
"In light of this we use tree convolution kernels to compute similarity between examples: 3 k ([ x, y ] , [ x (cid:48) , y (cid:48) ]) = TreeKernel ( y, y (cid:48) ) (7) where the TreeKernel function is a convolution kernel (Collins and Duffy, 2001) applied to trees.",
"Here we consider a particular case where y is represented as a dependency structure, as shown in Figure 1.",
"We use the partial tree kernel (Moschitti, 2006) which is designed for application to dependency trees.",
"For a given dependency tree partial tree kernels generate a series of all possible partial trees: any set of one or more connected nodes.",
"Given two trees the kernel returns the number of partial trees they have in common, interpreted as a similarity score.",
"Compared with string-based similarity, this kernel prefers sentences that share common syntactic sub-structures, some of which are not assigned high scores in string-based similarity metrics, as shown in Table 1.",
"Though tree-structured formal representations are more informative in obtaining relevance, not all logical forms can be represented as tree structures.",
"In SCAN (Lake and Baroni, 2018) y are action sequences without given grammars.",
"As we will show in the experiments, string-based similarity metrics have a broader scope of applications but are less effective than tree kernels in cases where y can be tree-structured.",
"Sampling for Meta-Test Using our kernels we compute the relevance distribution in Eq 4 to construct virtual tasks for MAML training.",
"We show the resulting procedure in Algorithm 1.",
"In order to construct a virtual task , a meta-train batch is first sampled at random from the training data (line 2), then the accompanying meta-test batch is created by sampling examples similar to those in meta-train (line 5).",
"We use Lev-MAML, Str-MAML and Tree-MAML to denote the meta-training using Levenshtein distance, string-kernel and tree-kernel similarity, respectively.",
"SCAN contains a set of natural language commands and their corresponding action sequences (Lake and Baroni, 2018).",
"We use the Maximum Compound Divergence (MCD) splits (Key-sers et al., 2020), which are created based on the principle of maximizing the divergence between the compound (e.g., patterns of 2 or more action sequences) distributions of the training and test tests.",
"We apply Lev-MAML and Str-MAML to SCAN where similarity measures are applied to the natural language commands.",
"Tree-MAML (which uses a tree kernel) is not applied as the action sequences do not have an underlying dependency tree-structure.",
"COGS contains a diverse set of natural language sentences paired with logical forms based on lambda calculus (Kim and Linzen, 2020).",
"Compared with SCAN, it covers various systematic linguistic abstractions (e.g., passive to active) including examples of lexical and structural generalization, and thus better reflects the compositionality of natural language.",
"In addition to the standard splits of Train/Dev/Test, COGS provides a generalization (Gen) set drawn from a different distribution that specifically assesses compositional generalization.",
"We apply Lev-MAML, Str-MAML and Tree-MAML to COGS; Lev-MAML and Str-MAML make use of the natural language sentences while Tree-MAML uses the dependency structures reconstructed from the logical forms.",
"In general, our method is model-agnostic and can be coupled with any semantic parser to improve its compositional generalization.",
"Additionally Lev-MAML, and Str-MAML are dataset agnostic provided the dataset has a natural language input.",
"In this work, we apply our methods on two widely used sequence-to-sequences models.",
"4 LSTM-based Seq2Seq has been the backbone of many neural semantic parsers (Dong and La-pata, 2016; Jia and Liang, 2016).",
"It utilizes 4 Details of implementations and hyperparameters can be found in the Appendix.",
"Transformer-based Seq2Seq also follows the encoder-decoder framework, but it uses Transformers (Vaswani et al., 2017) to replace the LSTM for encoding and decoding.",
"It has proved successful in many NLP tasks e.g., machine translation.",
"Recently, it has been adapted for semantic parsing (Wang et al., 2020b) with superior performance.",
"We try to see whether our MAML training can improve the compositional generalization of contemporary semantic parsers, compared with standard supervised learning.",
"Moreover, we include a meta-baseline, referred to as Uni-MAML, that constructs meta-train and meta-test splits by uniformly sampling training examples.",
"By comparing with this meta-baseline, we show the effect of similarity-driven construction of meta-learning splits.",
"Note that we do not focus on making comparisons with other methods that feature specialized architectures for SCAN datasets (see Section 5), as these methods do not generalize well to more complex datasets (Furrer et al., 2020).",
"GECA We additionally apply the good enough compositional augmentation (GECA) method laid out in Andreas (2020) to the SCAN MCD splits.",
"Data augmentation of this kind tries to make the training distribution more representative of the test distribution.",
"This approach is distinct from ours which focuses on the training objective, but the two can be combined with better overall performance as we will show.",
"Specifically, we show the results of GECA applied to the MCD splits as well as GECA combined with our Lev-MAML variant.",
"Note that we elect not to apply GECA to COGS, as the time and space complexity 5 of GECA proves very costly for COGS in our preliminary experiments.",
"The similarity-driven sampling distribution p in Eq 4 requires computing the similarity between every pair of training examples, which can be very expensive depending on the size of of the dataset.",
"As the sampling distributions are fixed during training, we compute and cache them beforehand.",
"However, they take an excess of disk space to store as essentially we need to store an N N matrix where N 5 See the original paper for details.",
"is the number of training examples.",
"To allow efficient storage and sampling, we use the following approximation.",
"First, we found that usually each example only has a small set of neighbours that are relevant to it.",
"6 Motivated by this observation, we only store the top 1000 relevant neighbours for each example sorted by similarity, and use it to construct the sampling distribution denoted as p top 1000 .",
"To allow examples out of top 1000 being sampled, we use a linear interpolation between p top 1000 and a uniform distribution.",
"Specifically, we end up using the following sampling distribution: p ( x (cid:48) , y (cid:48) | x, y ) = p top 1000 ( x (cid:48) , y (cid:48) | x, y )+(1 ) 1 N where p top 1000 assigns 0 probability to out-of top 1000 examples, N is the number of training examples, and is a hyperparameter for interpolation.",
"In practice, we set to 0 .",
"5 in all experiments.",
"To sample from this distribution, we first decide whether the sample is in the top 1000 by sampling from a Bernoulli distribution parameterized by .",
"If it is, we use p top1000 to do the sampling; otherwise, we uniformly sample an example from the training set.",
"Many tasks that assess out-of-distribution (O.O.D.) generalization (e.g. COGS) do not have an O.O.D.",
"6 For example, in COGS, each example only retrieves 3.6% of the whole training set as its neighbours (i.e., have non-zero tree-kernel similarity) on average.",
"Dev set that is representative of the generalization distribution.",
"This is desirable as a parser in principle should never have knowledge of the Gen set during training.",
"In practice though the lack of an O.O.D. Dev set makes model selection extremely difficult and not reproducible.",
"7 In this work, we propose the following strategy to alleviate this issue: 1) we sample a small subset from the Gen set, denoted as Gen Dev' for tuning meta-learning hy-perparmeters, 2) we use two disjoint sets of random seeds for development and testing respectively, i.e., retraining the selected models from scratch before applying them to the final test set.",
"In this way, we make sure that our tuning is not exploiting the models resulting from specific random seeds: we do not perform random seed tuning.",
"At no point are any of our models trained on the Gen Dev set.",
"On SCAN, as shown in Table 2, Lev-MAML substantially helps both base parsers achieve better performance across three different splits constructed according to the MCD principle.",
"8 Though our models do not utilize pre-training such as T5 (Raf-fel et al., 2019), our best model (Lev-MAML + LSTM) still outperforms T5 based models sig-nificantly in MCD1 and MCD2.",
"We show that GECA is also effective for MCD splits (especially 7 We elaborate on this issue in the Appendix. 8 Our base parsers also perform much better than previous methods, likely due to the choice of hyperparameters. in MCD1).",
"More importantly, augmenting GECA with Lev-MAML further boosts the performance substantially in MCD1 and MCD2, signifying that our MAML training is complementary to GECA to some degree.",
"Table 3 shows our results on COGS.",
"Tree-MAML boosts the performance of both LSTM and Transformer base parsers by a large margin: 6.5% and 8.1% respectively in average accuracy.",
"Moreover, Tree-MAML is consistently better than other MAML variants, showing the effectiveness of exploiting tree structures of formal representation to construct virtual tasks.",
"9 4 Discussion 4.1 SCAN Discussion The application of our string-similarity driven meta-learning approaches to the SCAN dataset improved the performance of the LSTM baseline parser.",
"Our results are reported on three splits of the dataset generated according to the maximum compound divergence (MCD) principle.",
"We report results on the only MCD tasks for SCAN as these tasks explicitly focus on the systematicity of language.",
"As such they assess a model's ability to extract sufficiently atomic concepts from its input, such that it can still recognize those concepts in a new context (i.e. as part of a different compound).",
"To succeed here a model must learn atoms from the training data and apply them compositionally at test time.",
"The improvement in performance our approach achieves on this task suggests that it does disincentivise the model from memorizing large sections or entire compounds from its input.",
"GECA applied to the SCAN MCD splits does improve performance of the baseline, however not to the same extent as when applied to other SCAN tasks in Andreas (2020).",
"GECA's improvement is comparable to our meta-learning method, despite the fact that our method does not leverage any data augmentation.",
"This means that our method achieves high performance by generalizing robustly outside of its training distribution, rather than by making its training data more representative of the test distribution.",
"The application of our Lev-MAML approach to GECA-augmented data results in further improvements in performance, suggest-9 The improvement of all of our MAML variants applied to the Transformer are significant (p < 0.03) compared to the baseline, of our methods applied to LSTMs, Tree-MAML is significant (p < 0.01) compared to the baseline.",
"ing that these approaches aid the model in distinct yet complementary ways.",
"All variants of our meta-learning approach improved both the LSTM and Transformer baseline parsers' performance on the COGS dataset.",
"The Tree-MAML method outperforms the Lev-MAML, Str-MAML, and Uni-MAML versions.",
"The only difference between these methods is the similarity metric used, and so differences in performance must be driven by what each metric selects for.",
"For further analysis of the metrics refer to the appendix.",
"The strong performance of the Uni-MAML variant highlights the usefulness of our approach generally in improving models' generalization performance.",
"Even without a specially designed meta-test task this approach substantially improves on the baseline Transformer model.",
"We see this as evidence that this kind of meta-augmented supervised learning acts as a robust regularizer particularly for tasks requiring out of distribution generalization.",
"Although the Uni-MAML, Lev-MAML, and Str-MAML versions perform similarly overall on the COGS dataset they may select for different generalization strategies.",
"The COGS generalization set is comprised of 21 sub-tasks which can be used to better understand the ways in which a model is generalizing (refer to Table 4 for examples of subtask performance).",
"Despite having very similar overall performance Uni-MAML and Str-MAML perform distinctly on individual COGS tasks with their performance appearing to diverge on a number of of them.",
"This would suggest that the design of the meta-test task may have a substantive impact on the kind of generalization strategy that emerges in the model.",
"For further analysis of COGS sub-task performance see the appendix.",
"Our approaches' strong results on both of these datasets suggest that it aids compositional generalization generally.",
"However it is worth nothing that both datasets shown here are synthetic, and although COGS endeavours to be similar to natural data, the application of our methods outside of synthetic datasets is important future work.",
"Compositional Generalization A large body of work on compositional generalization provide models with strong compositional bias, such as specialized neural architectures (Li et al., 2019; Russin",
"et al., 2019; Gordon et al., 2019), or grammar-based models that accommodate alignments between natural language utterances and programs (Shaw et al., 2020; Herzig and Berant, 2020).",
"Another line of work utilizes data augmentation via fixed rules (Andreas, 2020) or a learned network (Akyrek et al., 2020) in an effort to transform the out-of-distribution compositional generalization task into an in-distribution one.",
"Our work follows an orthogonal direction, injecting compositional bias using a specialized training algorithm.",
"A related area of research looks at the emergence of compositional languages, often showing that languages which seem to lack natural-language like compositional structure may still be able to generalize to novel concepts (Kottur et al., 2017; Chaabouni et al., 2020).",
"This may help to explain the ways in which models can generalize robustly on in-distribution data unseen during training while still struggling on tasks specifically targeting compositionality.",
"Meta-Learning for NLP Meta-learning methods (Vinyals et al., 2016; Ravi and Larochelle, 2016; Finn et al., 2017b) that are widely used for few-shot learning, have been adapted for NLP applications like machine translation (Gu et al., 2018) and relation classification (Obamuyide and Vla-chos, 2019).",
"In this work, we extend the conventional MAML (Finn et al., 2017b) algorithm, which was initially proposed for few-shot learning, as a tool to inject inductive bias, inspired by Li et al. (2018); Wang et al. (2020a).",
"For compositional generalization, Lake (2019) proposes a meta-learning procedure to train a memory-augmented neural model.",
"However, its meta-learning algorithm is specialized for the SCAN dataset (Lake and Baroni, 2018) and not suitable to more realistic datasets.",
"Our work highlights the importance of training objectives that select for robust generalization strategies.",
"The meta-learning augmented approach to supervised learning used here allows for the speci-fication of different constraints on learning through the design of the meta-tasks.",
"Our similarity-driven task design improved on baseline performance on two different compositional generalization datasets, by inhibiting the model's ability to memorize large sections of its input.",
"Importantly though the overall approach used here is model agnostic, with portions of it (Str-MAML, Lev-MAML, and Uni-MAML) proving dataset agnostic as well requiring only that the input be a natural language sentence.",
"Our methods are simple to implement compared with other approaches to improving compositional generalization, and we look forward to their use in combination with other techniques to further improve models' compositional ability.",
"This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.",
"We also acknowledge the fi-nancial support of the European Research Council (Titov, ERC StG BroadSem 678254) and the Dutch National Science Foundation (Titov, NWO VIDI 639.022.518)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"result",
"abstain",
"result",
"other",
"other"
] |
[
"We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft 1 .",
"The dataset consists of 7K human utterances and their corresponding parses.",
"Given proper world state, the parses can be interpreted and executed in game.",
"We report the performance of baseline models, and analyze their successes and failures.",
"Semantic parsing is used as a component for natural language understanding in human-robot interaction systems (Lauria et al., 2001; Bos and Oka, 2007; Tellex et al., 2011; Matuszek et al., 2013; Thomason et al., 2019), and for virtual assistants (Cam-pagna et al., 2017; Kollar et al., 2018; Campagna et al., 2019).",
"We would like to be able to apply deep learning methods in this space, as recently researchers have shown success with these methods for semantic parsing more generally, e.g. (Dong and Lapata, 2016; Jia and Liang, 2016; Zhong et al., 2017).",
"However, to fully utilize powerful neural network approaches, it is necessary to have large numbers of training examples.",
"In the space of human-robot (or human-assistant) interaction, the publicly available semantic parsing datasets are small.",
"Furthermore, it can be difficult to reproduce the end-to-end results (from utterance to action in the environment) because of the wide variety of robot setups and proprietary nature of personal assistants.",
"tion game Minecraft 2 , a popular multiplayer open-world voxel-based crafting game.",
"We also provide the associated platform for executing the logical forms in game.",
"Situating the assistant in Minecraft has several benefits for studying task oriented natural language understanding (NLU).",
"Compared to physical robots, Minecraft allows less technical overhead irrelevant to NLU, such as difficulties with hardware and large scale data collection.",
"On the other hand, our bot has all the basic in-game capabilities of a player, including movement and placing or removing voxels.",
"Thus Minecraft preserves many of the NLU elements of physical robots, such as discussions of navigation and spatial object reference.",
"Working in Minecraft may enable large scale human interaction because of its large player base, in the tens of millions.",
"Furthermore, although Minecraft's simulation of physics is simplified, the task space is complex.",
"While there are many atomic objects in the game, such as animals and block-types, that require no perceptual modeling, the player also interacts with complex structures made up of collections of voxels such as a house or a hill.",
"The assistant cannot apprehend them without a perceptual system, creating an ideal test bed for researchers interested in the interactions between perception and language.",
"Our contributions in the paper are as follows: Grammar: We develop a grammar over a set of primitives that comprise a mid-level interface to Minecraft for machine learning agents.",
"Data: We collect 7K crowd-sourced annotations of commands generated independent of our grammar.",
"In addition to the natural language commands and the associated logical forms, we release the tools used to collect these, which allow 2 https://minecraft.net/en-us/ .",
"Models: We show the results of several neural semantic parsing models trained on our data.",
"Execution: Finally, we also make available the code to execute logical forms in the game, allowing the reproduction of end-to-end results.",
"This also opens the door to using the data for reinforcement and imitation learning with language.",
"We also provide access to an interactive bot using these models for parsing 3 .",
"In this section we summarize a grammar for generating logical forms that can be interpreted into programs for the agent architecture described in (Gray et al., 2019).",
"The assistant's basic functions include moving, and placing and destroying blocks.",
"Supporting these basic functions are methods for control flow and memory manipulation.",
"Basic action commands: The assistant can MOVE to a specified location; or DANCE with a specified sequence of steps.",
"It can BUILD an object from a known schematic (or by making a copy of a block-object in the world) at a given location, or DESTROY an existing object.",
"It can DIG a hole of a given shape at a specified location, or FILL one up.",
"The agent can also be asked to complete a partially built structure however it sees fit by FREEBUILD .",
"Control commands: Additionally, the agent can STOP or RESUME an action, or UNDO the result of a recent command.",
"Furthermore, the assistant can LOOP given a task and a stop-condition.",
"Finally, it needs to be able to understand when a sentence does not correspond to any of the above mentioned actions, and map it to a NOOP .",
"Memory interface: Finally, the assistant can interact with its SQL based memory.",
"It can place or update rows or cells, for example for tagging objects.",
"This can be considered a basic version of the self-improvement capabilities in (Kollar et al., 2013; Thomason et al., 2015; Wang et al., 2016, 2017).",
"It can retrieve information for question answering similar to the VQA in (Yi et al., 2018).",
"The focus of this paper is an intermediate representation that allows natural language to be interpreted into programs over the basic actions from the previous section.",
"The logical forms (represented as trees) making up this representation consist of three basic types of nodes: internal nodes that can have children, categorical (leaf) nodes that belong to a fixed set of possibilities, and span nodes that point to a region of text in the natural language utterance.",
"The full grammar is shown in the Appendix C; and a partial schematic representation is shown in Figures 1 and 2.",
"In the paragraphs below, we give more detail about some of the kinds of nodes in the grammar.",
"We emphasize that this is an intermediate representation.",
"The logical forms do not come with any mechanism for generating language, and nodes do not correspond in any simple way with words.",
"On the other hand, the logical forms do not encode all of the information necessary for execution without the use of an interpreter that can access the assistant's memory and the Minecraft world state.",
"Internal nodes: Internal nodes are nodes that allow recursion; although most do not require it.",
"They can correspond to top-level actions, for example BUILD ; in which case they would just be an action node with action type build; see Figure 1.",
"They can also correspond to arguments to top-level actions, for example a reference object, which specifies an object that has a spatial location.",
"Internal nodes are not generally required to have children; it is the job of the interpreter to deal with under-specified programs like a BUILD with no arguments.",
"In addition to the various LOCATION , REFERENCE OBJECT , SCHEMATIC , and REPEAT nodes which can be found at various levels, another notable sub-tree is the action's STOP CONDITION , which essentially allows the agent to understand while loops (for example: dig down until you hit the bedrock or follow me).",
"Leaf nodes: Eventually, arguments have to be specified in terms of values which correspond to (fixed) agent primitives.",
"We call these nodes categorical leaves (green rectangles in Figures 1 and 2).",
"As mentioned above, an action internal node has a categorical leaf child which specifies the action type .",
"There are also repeat type nodes similarly specifying a kind of loop for example in the REPEAT sub-tree corresponding to make three houses the repeat type for specifies a for loop).",
"There are also location type nodes specifying if a location is determined by a reference object, a set of coordinates,",
"etc.; relative direction nodes that have values like left or right.",
"The complete list of categorical nodes is given in the Appendix C. However, there are limits to what we can represent with a pre-specified set of hard-coded primitives, especially if we want our agent to be able to learn new concepts or new values.",
"Additionally, even when there is a pre-specified agent primitive, mapping some parts of the command to a specific value might be better left to an external module (e.g. mapping a number string to an integer value).",
"For these reasons, we also have span leaves (red ovals in Figure 2).",
"For example, in the parse for the command Make three oak wood houses to the left of the dark grey church. , the SCHEMATIC (an internal node) might be specified by the command sub-string corresponding to its name by the spanhouses and the requested block type by the span oak wood.",
"The range of the for loop is specified by the REPEAT 's for value (three), and the REFERENCE OBJECT for the location is denoted in the command by its generic name and specific color with spans church and dark grey.",
"corre-Figure 4: Frequency of each action type in the different data collection schemes described in Section 3.1.",
"sponding to writing to memory and reading from memory; and HUMAN GIVE COMMAND which also produces an ACTION SEQUENCE , which is a special internal node whose children are ordered; multiple children correspond to an ordered sequence of commands (build a house and then a tower).",
"In Figures 1 and 2 we show a schematic representation for an ACTION SEQUENCE .",
"This paper introduces the CraftAssist Instruction Parsing (CAIP) dataset of English-language commands and their associated logical forms (see Appendix D for examples and Appendix C for a full grammar specification).",
"We collected natural language commands written by crowd-sourced workers in a variety of settings.",
"The complete list of instructions given to crowd-workers in different settings, as well as step-by-step screen-shot of the annotation tool, are provided in the Appendix B. The basic data cleanup is described in Appendix A. 3.1.1 Image and Text Prompts We presented crowd-sourced workers with a description of the capabilities of an assistant bot in Figure 5: Histograms showing distribution over number of nodes in a logical form (top) and utterance length in words (bottom) for each data type.",
"They were then asked to provide examples of commands that they might issue to an in-game assistant.",
"We refer to these instructions as prompts in the rest of this paper.",
"We asked crowd-workers to play creative-mode Minecraft with our assistant bot, and they were instructed to use the in-game chat to direct the bot as they chose.",
"The game sessions were capped at 10 minutes and players in this setting had no prior knowledge of the bot's capabilities or the grammar.",
"We refer to these instructions as Interactive in the rest of this paper.",
"The instructions of this setting are included in Appendix B.2.",
"Both prompts and interactive instructions come without a reference logical form and need to be annotated.",
"To facilitate this process, we designed a multi-step web-based tool which asks users a series of multiple-choice questions to determine the semantic content of a sentence.",
"The responses to some questions will prompt other more specific questions, in a process that mirrors the hierarchical structure of the grammar.",
"The responses are then processed to produce the complete logical form.",
"This allows crowd-workers to provide annotations with no knowledge of the specifics of the grammar described above.",
"A pictorial representation of the annotation process is shown in Figure 3 and a more detailed explanation of the process along with screen-shots of the tool is given in Appendix B.3.",
"We used a small set of tasks that were representative of the actual annotations to select skilled crowd-sourced workers by manually verifying the accuracy of responses on these.",
"Each utterance in our collection of prompts and interactive chats was shown to three different qualified annotators and we included the utterance and logical form in the dataset only if at least 2 out of 3 qualified annotators agreed on the logical form output.",
"The total number of utterances sent to turkers was 6,775.",
"Out of these, 6,693 had at least 2/3 agreements on the logical form and were kept.",
"Of these, 2,872 had 3/3 agreements.",
"The final dataset has 4,532 annotated instructions from the prompts setting (Section 3.1.1), and 2,161 from interactive play (Section 3.1.2).",
"The exact instructions shown to Turkers in the annotation tools are reproduced in Figures 9 and 11 in supplementary.",
"As in (Yih et al., 2016), we have found that careful design of the annotation tool leads to significant improvements in efficiency and accuracy.",
"In particular, we re-affirm the conclusion from (Yih et al., 2016) that having each worker do one task (e.g. labeling a single node in the tree) makes annotation easier for workers.",
"Since the different data collection settings described in Section 3.1 imposed different constraints and biases on the crowd-sourced workers, the distribution of actions in each subset of data is therefore different.",
"The action frequencies of each subset are shown in Figure 4.",
"Some crowd-sourced commands describe an action that is outside the scope of the grammar.",
"To account for this, users of the annotation tool are able to mark that a sentence is a command to perform an action that is not covered by our grammar yet.",
"The resulting trees are labeled as OTHERACTION , and their frequency in each dataset in shown in Figure 4.",
"Annotators still have the option to label other nodes in the tree, such as the action's LOCATION or REFERENCE OBJECT .",
"In both the prompts and interactive data, OTHERACTION amounted to approximately 14% of the data.",
"For each of our data types, Figure 5 show a histogram of sentence length and number of nodes.",
"On an average interactive data has shorter sentences and smaller trees.",
"We show the linguistic styles and choice of words of the data sources by displaying the surface forms of a set of trees.",
"We randomly picked trees of size (number of nodes) 7 that appear in both data sources, and then for the same tree structure, we looked at the utterances corresponding to that tree.",
"We show some representative examples in table 1.",
"We show more examples of the data in the Appendix D 4 Related Work There have been a number of datasets of natural language paired with logical forms to evaluate semantic parsing approaches, e.g. (Price, 1990; Tang and Mooney, 2001; Cai and Yates, 2013; Wang et al., 2015; Zhong et al., 2017).",
"The dataset presented in this work is an order of magnitude larger than those in (Price, 1990; Tang and Mooney, 2001; Cai and Yates, 2013) and is similar in scale to the datasets in (Wang et al., 2015), but smaller than (Zhong et al., 2017).",
"In addition to mapping natural language to logical forms, our dataset connects both of these to a dynamic environment.",
"In (Lauria et al., 2001; Bos and Oka, 2007; Tellex et al., 2011; Matuszek et al., 2013; Thomason et al., 2019) semantic parsing has been used for interpreting natural language commands for robots.",
"In our paper, the robot is embodied in the Minecraft game instead of in the physical world.",
"In (Boye et al., 2006) semantic parsing has been used for spoken dialogue with an embodied character in a 3-D world with pattern matching and rewriting phases.",
"In our work, the user along with the assistant is embodied in game and instructs using language.",
"We go from language to logical forms end-to-end with no pattern match necessary.",
"Semantic parsing in a voxel-world recalls (Wang et al., 2017), where the authors describe a method for building up a programming language from a small core via interactions with players.",
"We demonstrate the results of several neural parsing models on our dataset.",
"In particular, we show the results of a re-implementation of (Dong Prompts bot move to where the tree is dig a large size hole to put these waste particles into the hole please build a sphere on that location hey bot can you dig a 5 by 5 hole for me Interactive find tree dig large hole build a sphere over here dig a 5 x 5 hole Table 1: Choice of words across different data sources for the same logical form (per column).",
"and Lapata, 2016) adapted to our grammar, and a straightforward fine-tuned BERT model (Devlin et al., 2018).",
"There have been several other papers proposing neural architectures for semantic parsing, for example (Jia and Liang, 2016; Zhong et al., 2017; Wang et al., 2018; Hwang et al., 2019); in particular (Hwang et al., 2019) uses a BERT based model.",
"In those papers, as in this one, the models are trained with full supervision of the mapping from natural language to logical forms, without considering the results of executing the logical form (in this case, the effect on the environment of executing the actions denoted by the logical form).",
"There has been progress towards weakly super-vised semantic parsing (Artzi and Zettlemoyer, 2013; Liang et al., 2016; Guu et al., 2017) where the logical forms are hidden variables, and the only supervision given is the result of executing the logical form.",
"There are now approaches that have shown promise without even passing through (dis-crete) logical forms at all (Riedel et al., 2016; Nee-lakantan et al., 2016).",
"We hope that the dataset introduced here, which has supervision at the level of the logical forms, but whose underlying grammar and environment can be used to generate essentially infinite weakly supervised or execution rewards, will also be useful for studying these models.",
"Minecraft, especially via the MALMO project (Johnson et al., 2016) has been used as a base environment for several machine learning papers.",
"It is often used as a testbed for reinforcement learning (RL) (Shu et al., 2017; Udagawa et al., 2016; Alaniz, 2018; Oh et al., 2016; Tessler et al., 2017).",
"In these works, the agent is trained to complete tasks by issuing low level actions (as opposed to our higher level primitives) and receiving a reward on success.",
"Others have collected large-scale datasets for RL and imitation learning (Guss et al., 2019a,b).",
"Some of these works (e.g. (Oh et al., 2017)) do consider simplified, templated language as a method for composably specifying tasks, but training an RL agent to execute the scripted primitives in our grammar is already nontrivial, and so the task space and language in those works is more constrained than what we use here.",
"Nevertheless, our work may be useful to researchers interested in RL (or imitation): using our grammar and executing in game can supply (hard) tasks and descriptions, and demonstrations.",
"Another set of works (Kitaev and Klein, 2017; Yi et al., 2018) have used Minecraft for visual question answering with logical forms.",
"Our work extends these to interactions with the environment.",
"Finally, (Allison et al., 2018) is a more focused study on how a human might interact with a Minecraft agent; our collection of free generations (see 3.1.1) includes annotated examples from similar studies of players interacting with a player pretending to be a bot.",
"In order to assess the challenges of the dataset, we implement two models which learn to read a sentence and output a logical form by formulating the problem as a sequence-to-tree and a sequence-to-sequence prediction task respectively.",
"Our first model adapts the Seq2Tree approach of (Dong and Lapata, 2016) to our grammar.",
"In short, a bidirectional RNN encodes the input sentence into a sequence of vectors, and a decoder recursively predicts the tree representation of the logical form, starting at the root and predicting all of the children of each node based on its parent and left siblings and input representation.",
"Sentence Encoder and Attention: We use a bidirectional GRU encoder (Cho et al., 2014) which encodes a sentence of length T s = ( w 1 , . . . w T ) into a sequence of T dimension d vectors: f GRU ( s ) = ( h 1 , . . . , h T ) R d T Tree Decoder: The decoder starts at the root, computes its node representation and predicts the state of its children, then recursively computes the representations of the predicted descendants.",
"Similarly to Seq2Tree, a node representation r n is computed based on its ancestors and left siblings.",
"We also found it useful to condition each of the node representation on the encoder output explicitly for each node.",
"Thus, we compute the representation r n t and recurrent hidden state g n t for node n t as: r n t = attn ( v n t + g n t 1 , ( h 1 , . . . , h T ); M ) (1) g n t = f rec ( g n t 1 , ( v (cid:48) n t + r n t )) (2) Where attn is multi-head attention, M R d d K is a tree-wise parameter, f rec is the GRU recurrence function, and v (cid:48) n t is a node parameter (one per category for categorical nodes), and n t 1 denotes either the last predicted left sibling if there is one or the parent node otherwise.",
"Prediction Heads: Finally, the decoder uses the computed node representations to predict the state of each of the internal, categorical, and span nodes in the grammar.",
"We denote each of these sets by I , C and S respectively, and the full set of nodes as N = I C S .",
"First, each node in N is either active or inactive in a specific logical form.",
"We denote the state of a node n by a n { 0 , 1 } .",
"All the descendants of an inactive internal node n I are considered to be inactive.",
"Additionally, each categorical node n C has a set of possible values C n ; its value in a specific logical form is denoted by the category label c n { 1 , . . . , | C n |} .",
"Finally, active span nodes n S for a sentence of length T have a start and end index ( s n , e n ) { 1 , . . . , T } 2 .",
"We compute, the representations r n of the nodes as outlined above, then obtain the probabilities of each of the labels by: n N , p ( a n ) = ( (cid:104) r n , p n (cid:105) ) (3) n C , p ( c n ) = softmax ( M cn r n ) (4) n S , p ( s n ) = softmax ( r T n M sn ( h 1 , . . . , h T )) p ( e n ) = softmax ( r T n M en ( h 1 , . . . , h T )) (5) where the following are model parameters: n N , p n R d n C , M cn R d d n S , ( M sn , M en ) n R d d 2 Let us note the parent of a node n as ( n ) .",
"Given Equations 3 to 5, the log-likelihood of a tree with states ( a , c , s , e ) given a sentence s is then: L = (cid:88) n N a ( n ) log( p ( a n )) + (cid:88) n C a n log( p ( c n )) + (cid:88) n S a n (cid:16) log( p ( s n )) + log( p ( e n )) (cid:17) (6) Overall, our implementation differs from the original Seq2Tree in three ways, which we found lead to better performance in our setting.",
"First, we replace single-head with multi-head attention.",
"Secondly, the cross-attention between the decoder and attention is conditioned on both the node embedding and previous recurrent state.",
"Finally, we replace the categorical prediction of the next node by a binary prediction problem: since we know which nodes are eligible as the children of a specific node (see Figures 1 and 2), we find that this enforces a stronger prior.",
"We refer to this modified implementation as SentenceRec.",
"5.2 Sequence to Sequence Model Our second approach treats the problem of predicting the logical form as a general sequence-to-sequence (Seq2Seq) task; such approaches have been used in semantic parsing in e.g. (Jia and Liang, 2016; Wang et al., 2018).",
"We take the approach of (Jia and Liang, 2016) and linearize the output trees: the target sequence corresponds to a Depth First Search walk through the tree representation of the logical form.",
"More specifically the model needs to predict, in DFS order, a sequence of tokens corresponding to opening and closing internal nodes, categorical leaves and their value, and span leaves with start and end sequences.",
"In practice, we let the model predict span nodes in two steps: first predict the presence of the node, then predict the span value, using the same prediction heads as for the SentenceRec model (see Equation 5 above).",
"With this formalism, the logical form for e.g. build a large blue dome on top of the walls will be: (ACTION_TYPE:BUILD, OPEN:SCHEMATIC, HAS_SIZE, SIZE_SPAN-(2,2), HAS_COLOR, COLOR_SPAN-(3,3), HAS_NAME, NAME_SPAN-(4,4), CLOSE:SCHEMATIC, OPEN:LOCATION, LOC_TYPE:REF_OBJECT, REL_DIR:UP, OPEN:REF_OBJECT,HAS_NAME,NAME_SPAN-(9,9),CLOSE:REF_OBJECT, CLOSE:LOCATION) We train a BERT encoder-decoder architecture on this sequence transduction task, where the training loss is a convex combination of the output sequence log-likelihood and the span cross-entropy loss.",
"Pre-trained Sentence Encoder: Finally, recent work has shown that using sentence encoder that has been pre-trained on large-scale language modeling tasks can lead to substantial performance Acc.",
"improvements (Song et al.,",
"2019).We use the pre-trained DistilBERT model of (Sanh et al., 2019) as the encoder of our sequence-to-sequence model, and also propose a version of the SentenceRec which uses it to replace the bidirectional RNN.",
"In this Section, we evaluate the performance of our baseline models on the proposed dataset.",
"Training Data: The CAIP datasets consists in a total of 6693 annotated instruction-parse pairs.",
"In order for our models to make the most of this data while keeping the evaluation statistically significant, we create 5 different train/test splits of the data and report the average performance of models trained and evaluated on each of them.",
"In each case, we hold out 650 examples from Prompts and 350 from Interactive for testing, and use the remaining 5693 as the training set.",
"Modeling Choices: For the end-to-end trained SentenceRec model, we use a 2-layer GRU sentence encoder and all hidden layers have dimension d = 256 .",
"We use pre-trained word embeddings computed with FastText with subword information (Bojanowski et al., 2017).",
"The decoder uses a GRU recurrent cell and 4-headed attention.",
"The Seq2Seq model uses a variant of the bert-base-uncased provided in the Transformer library 4 with 6 encoding and decoding layers.",
"For the Seq2Seq model and the SentenceRec with pre-trained encoder, we use the distilbert-base-uncased encoder from the same library.",
"The Seq2Seq model uses beam search decoding with 15 beams.",
"All models are trained with the Adam optimizer with quadratic learning rate decay.",
"We provide our model and training code along with the dataset for reproducibility purposes.",
"Overview of Results: Table 2 provides the average accuracy (computed as the proportion of logical forms that are entirely accurately predicted) and standard deviation across all five splits, as well as the contributions of the Interactive and Prompts 4 https://github.com/huggingface/transformers N=2 N=5 N=15 Joint 67.7 72.76 75.7 Interactive 83.83 88.34 90.63 Prompts 59.02 64.37 67.66 Table 3: Recall at N for the Seq2Seq model beam search.",
"data.",
"The first observation is that using a pre-trained encoder leads to a significant improvement, with a 10 point boost in accuracy.",
"On the other hand, while the Seq2Seq model is more general and makes less use of our prior knowledge of the structure of logical forms, it does marginally better than the recursive prediction model (although within one standard deviation).",
"Secondly, although the models are trained on more data provided from the Prompts setting than from Interactive play, they all do better on the latter.",
"This is consistent with previous observations on the dataset statistics in Section 3.2.3 which find that players tend to give shorter instructions with simpler execution.",
"Finally, we note that one of the advantages of having the parser be part of an interactive agent is that it can ask the player for clarification and adapt its behavior when it is made aware of a mistake (Yao et al., 2019).",
"In that spirit, Table 3 provides Recall at N numbers, which represent how often the true parse is within the N first elements of the beam after beam search.",
"Recall at 2 does provide a consistent boost over the accuracy of a single prediction, but even the full size 15 beam does not always contain the right logical form.",
"Error Analysis: We further investigate the errors of the Seq2seq models on one of the data splits.",
"We find that the model still struggles with span predictions: out of 363 errors, 125 only make mistakes on spans (and 199 get the tree structure right but make mistakes on leaves).",
"Figure 6 shows the nodes which are most commonly mistaken, with the number of false positive and false negatives out of these 363 mistakes.",
"Unsurprisingly, the most commonly confused span leaf is has tag, which we use as a miscellaneous marker.",
"Aside from that has tag however, the span mistakes are evenly spread over all other leaves.",
"The next most common source of mistakes comes from the model struggling between identifying whether a provided location corresponds to the target of the action or to the reference object, and to identify instructions which imply a repetition.",
"The former indicates a lack of compositionality in the input representation: the model correctly identifies that a location is mentioned, but fails to identify its context.",
"Repeat conditions on the other hand challenge the model due to the wide variety of possible stop condition, a problem we suggest future work pay special attention to.",
"In this work, we have described a grammar over a mid-level interface for a Minecraft assistant.",
"We then discussed the creation of a dataset of natural language utterances with associated logical forms over this grammar that can be executed in-game.",
"Finally, we showed the results of using this new dataset to train several neural models for parsing natural language instructions.",
"Consistent with recent works, we find that BERT pre-trained models do better than models trained from scratch, but there is much space for improvement.",
"We believe this data will be useful to researchers studying semantic parsing, especially interactive semantic parsing, human-robot interaction, and even imitation and reinforcement learning.",
"The code, dataset and annotation tools described in the paper have been open-sourced 5 ."
] | [
"objective",
"abstain",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain"
] |
[
"Regularization methods applying input perturbation have drawn considerable attention and have been frequently explored for NMT tasks in recent years.",
"Despite their simplicity and effectiveness, we argue that these methods are limited by the under-fitting of training data.",
"In this paper, we utilize prediction difference for ground-truth tokens to analyze the fitting of token-level samples and find that under-fitting is almost as common as over-fitting.",
"We introduce prediction difference regularization (PD-R), a simple and effective method that can reduce over-fitting and under-fitting at the same time.",
"For all token-level samples, PDR minimizes the prediction difference between the original pass and the input-perturbed pass, making the model less sensitive to small input changes, thus more robust to both perturbations and under-fitted training data.",
"Experiments on three widely used WMT translation tasks show that our approach can significantly improve over existing perturbation regularization methods.",
"On WMT16 En-De task, our model achieves 1.80 SacreBLEU improvement over vanilla transformer.",
"Neural machine translation models have achieved great success in recent years (Sutskever et al., 2014; Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).",
"Despite their efficiency and superb performance, NMT models are prone to over-fitting that universal regularization techniques such as dropout (Hinton et al., 2012) and label smoothing (Szegedy et al., 2016) have been indispensable.",
"However, over-fitting is still a significant problem for NMT, especially for small and medium tasks, which motivates researchers to constantly explore more specialized and sophisticated regularization techniques.",
"Particularly, regularization methods applying input perturbation have been frequently explored for NMT models in recent years (Bengio et al., 2015; Wu et al., 2019; Sato et al., 2019; Takase and Kiyono, 2021).",
"In these methods, neural models are trained to maximize the likelihood of perturbed samples that perturbed by a certain type of perturbations, with a primary intention to enhance model's robustness to perturbations, since neural models have been discovered fragile to small input noises (Szegedy et al., 2014; Liang et al., 2018; Belinkov and Bisk, 2018).",
"In the past few years, many types of perturbations have been proposed to machine translation and been shown effective, including word-dropout (Gal and Ghahramani, 2016), word-replacement (Bengio et al., 2015; Wu et al., 2019) and adversarial perturbation (Miyato et al., 2017; Sato et al., 2019), etc.",
"In this paper, unlike previous works which are devoted to finding stronger perturbations and more appropriate perturbation schedules, we rethink the existing perturb-and-fit mechanism and prove that indiscriminate fitting of perturbed samples ignores and aggravates under-fitting, which dramatically limits the effectiveness of perturbation regularization.",
"We further propose prediction difference regularization (PD-R), a simple and effective method that can alleviate over-fitting and under-fitting at the same time and significantly enhance the effectiveness of perturbation regularization.",
"Specifically, we use the prediction difference for ground-truth labels before and after input perturbation as an indicator of over-fitting and under-fitting for token-level samples.",
"Quantitative analysis shows that a considerable part of token-level predictions get improved after input perturbation, indicating that the model is less fitted to those original samples compared to the perturbed samples, which has been ignored by previous works.",
"We then divide labels in a batch into relatively under-fitted and over-fitted subsets according to real-time 7665 prediction difference and train only one subset to fit the perturbed inputs and the other subset to fit the original inputs.",
"Experiments show that training only the relatively under-fitted subset to further fit the perturbed inputs dramatically degrade the model performance, while the opposite gets better results than the existing indiscriminate way.",
"This indicates that existing methods are hindered by the excessive fitting of perturbed data.",
"We further propose to use prediction difference as a regularization term, where the prediction difference is the divergence of prediction distribution caused by input perturbation.",
"Since the value of prediction difference reflects the severity of over-fitting or under-fitting, both of which are cases we want to avoid for training models, regularizing prediction difference has been a natural solution to avoid above fitting problems.",
"By combining cross-entropy loss and the prediction difference term, a model can be trained to fit training data with control of over-fitting and under-fitting.",
"We apply PD-R on simplest word dropout regularization and conduct experiments on three widely used WMT translation tasks covering small-scale, medium-scale, and large-scale data sets.",
"Our method significantly improves over existing perturbation regularization methods.",
"On WMT16 En-De translation task, our method achieves 1.80 SacreBLEU improvement over vanilla transformer model and 1.12 SacreBLEU over traditional word dropout regularization.",
"In this section, we introduce basic principles of neural machine translation, representative types of perturbations as well as their training objectives.",
"For NMT, the probability of a target sentence Y = y 1: J conditioned on its parallel source sentence X = x 1: I is established based on chain rule:",
"where represents the parameters of the model, y 0 and y J +1 are special tokens representing the beginning and end of a sentence respectively.",
"On this basis, NMT models are trained with the cross-entropy loss to minimize the negative log-likelihood of all samples in the training set D : L = L ( D , ) = 1 D (cid:88) ( X,Y ) D (cid:96) ( X, Y, ) , (2) where D = { ( X n , Y n ) } | V | n =1 , | V | is the size of the data set, (cid:96) ( X, Y, ) = log p ( Y | X, ) .",
"Word Dropout and Replacement The simplest way to apply perturbation is to mask or replace one or more tokens of the original input sequence.",
"The resulting sequence x is sampled from the original sequence and the perturbation sequence: x i = (cid:40) x i , with probability 1 , x pi , with probability , (3) where 0 < < 1 is the hyper-parameter of bernoulli sampling and x pi is the i-th word of the perturbation sequence.",
"Note that the perturbation sequence x p consists of zero vectors for word dropout (Gal and Ghahramani, 2016), and consists of random words sampled from the vocabulary with uniform or a particular distribution for word replacement (Bengio et al., 2015; Wu et al., 2019).",
"Adversarial Perturbation Adversarial Training (AdvT) tries to make perturbation that maximize the loss function, which is believed more effective for regularization.",
"As described in Miyato et al. (2017) and Sato et al. (2019), the perturbed input embedding for x i can be computed as follows: e i = e i + r i , (4) where e i is original embedding of i-th source word, is a scalar hyper-parameter that controls the norm of the perturbation, and r i is the worst case unit perturbation vector approximated by gradient back-propagation(Goodfellow et al., 2015): r i = g i || g i || 2 , g i = e i L ( D , ) , (5) where g i is the gradient of a model's loss function with respect to its input embedding e i .",
"In most cases, the inputs of the decoder side can also be perturbed in the same way as the encoder side.",
"For scheduled sampling (Bengio et al., 2015) however, perturbation is limited at the decoder side.",
"Regularization For word dropout and word replacement, the model is trained to fit the perturbed samples X :",
"where D is the perturbed data set.",
"For adversarial training, two forward passes and two backward passes are required for computing perturbation vectors and then training with them.",
"The model is trained to fit both the original samples and adversarial samples with loss function: L = L ( D , ) + L ( D , ) , (7) where is a hyper-parameter.",
"It is usually believed that a model's prediction for ground-truth target tokens will be hindered when the input is perturbed, which has been the initial motivation for perturbation regularization.",
"However, this conclusion is not necessarily true in experiment for many reasons: Firstly, neural networks are complex and may not behave in an ideally logical way.",
"Secondly, perturbations are randomly produced and may have complex properties.",
"For example, word-replacement may induce synonyms or heteronyms, and high-dimension embedding perturbation is hard to interpret.",
"Thirdly, the model is uncertain during training due to parameter dropout, which further brings uncertainty to the model's reaction to perturbations.",
"In this section, we analyze the influence of perturbations leveraging token-level prediction difference.",
"Here the prediction difference is defined as the change of model's prediction probabilities for ground-truth target tokens (Gu and Tresp, 2019; Li et al., 2019a): p ( y j ) = p ( y j | y 0: j 1 , X, ) p ( y j | y 0: j 1 , X, ) .",
"We apply random perturbations to samples in the test set and divide all target labels into two subsets according to their prediction change: positively influenced subset S p containing labels whose prediction probabilities get bigger after input perturbation",
"and negatively influenced subset S n containing labels whose prediction probabilities get smaller.",
"We compute the quantitative proportion and the average value of p for these two subsets to evaluate the influence of different perturbations.",
"Since the prediction difference is also under the influence of parameter dropout during training, we also conduct experiments both with and without parameter dropout difference.",
"The original pass and the perturbed pass are carried out by two different sub-models if their parameter dropout mask is different.",
"In experiment, We use a transformer base model trained on WMT16 En-De data set and conduct our experiments on a test set which is a combination of 5 test sets from WMT16 to WMT20.",
"Our analysis covers different kinds of perturbations, including word-dropout, word-replacement, and adversarial perturbation.",
"The word-dropout and word-replacement probabilities are set as 0.05, and all perturbations are applied on both sides of the model.",
"As illustrated in figure 1, for any certain type of perturbation, the negative impact is principal, especially for adversarial perturbation.",
"However, the positive influence is non-negligible since the proportion of positively influenced tokens could reach 30%-40% for word-dropout and word-replacement.",
"With parameter dropout difference, the positive influence could get further bigger and become more crucial.",
"We attribute prediction difference to relatively over-fitting and under-fitting of token-level samples.",
"Since perturbations are very small, perturbed samples can be approximately viewed as good samples.",
"For one target label, if its prediction probability gets smaller after a small input perturbation, it indicates the model is relatively over-fitted to the original sample, while the contrary case is the reflection of relatively under-fitting.",
"With parameter dropout, predictions are carried out by sub-models, and these fitting problems also reflect the relative fitting bias of sub-models, which is also what we want to avoid.",
"As mentioned above, existing perturbation regularization methods are based on the motivation to enhance the model's performance against input perturbation and avoid over-fitting.",
"However, experiments show that a model could be better fitted to the perturbed data rather than the original data, which is regarded by us as a sign of relatively under-fitting.",
"This indicates that training a model 7667",
"We further carry out selective training for word-dropout regularization, where one subset is trained to fit the perturbed inputs and the other subset is trained to fit the original inputs.",
"As presented in table 1, training only S n gets better results than existing indiscriminate training, while training only S p gets worse results than vanilla transformer.",
"This implies that the existing method suffers from degeneration caused by aggravated under-fitting.",
"Since both positive and negative prediction difference is a sign of improper fitting of samples, we",
"therefore propose prediction difference regularization (PD-R), to regularize the model directly with the prediction difference:",
"where R [ ] is the distance of two distributions, ( X, Y ) is a sample from data set D , represents all prediction steps, P ( | X, Y < , (cid:48) ) is the prediction distributions for all steps conditioned on original source input X , target teacher forcing target input Y < and sub-model with parameters (cid:48) , and P ( | X, Y < , (cid:48)(cid:48) ) is the prediction distributions for all steps conditioned on perturbed source input X , perturbed target teacher forcing target input Y < and sub-model with parameters (cid:48)(cid:48) .",
"The total regularization loss is averaged over all samples in the data set: LPD R ( D , ) = 1 D (cid:88) ( X,Y ) D (cid:96) PD R ( X, Y, ) .",
"In experiment, we apply PD-R on simplest word-dropout perturbation with = 0 .",
"05 in",
"Eq.(3) and = 1 .",
"0 in",
"Eq.(11) without further hyper-parameter search.",
"R [ ] in",
"Eq.(9) is implemented as L1 distance, which performs slightly better than KL-divergence in our experiments.",
"We evaluate PD-R on three public WMT machine translation tasks and compare it with representative related works.",
"To fully verify the effectiveness of our method on NMT, we conduct experiments on three machine translation tasks, including small-scale WMT16 English-Romanian(En-Ro), medium-scale WMT16 English-German (En-De), and large-scale WMT17 Chinese-English (Zh-En).",
"English-Romanian This data set contains about 0.6M processed parallel sentence pairs tokenized by Moses toolkit (Koehn et al., 2007) and segmented with 40K merge operations using BPE (Sennrich et al., 2016).",
"We use news-dev 2016 and news-test 2016 as the validation set and test set respectively.",
"English-German The WMT16 En-De data set consists of about 4.5M parallel sentences pairs coded with 30K BPE merge-operations.",
"For evaluation, we average the last 5 epochs and report results on all test sets from WMT2016 to WMT2020.",
"Chinese-English Our data set consists of over 20M parallel sentence pairs.",
"The English and Chinese sentences are tokenized with Moses toolkit and Stanford Segmenter respectively, which are further applied 32K BPE segmentation.",
"We use newsdev2016 for validation and newstest2017 for testing.",
"To fairly compare each method, we reproduce all compared methods with transformer model (Vaswani et al., 2017) using open-source toolkit Fairseq (Ott et al., 2019), with the same model configuration and hardware facilities.",
"We use transformer base configuration for all experiments, with 6 encoder and decoder layers, 512 hidden dimensions, 8 attention heads and 2048 FFN dimensions.",
"We train all models with 4000 warm-up steps, initial learning rate of 7 e 4 , label smoothing factor of 0.1, and Adam optimizer with 1 = 0 .",
"9 , 2 = 0 .",
"98 and (cid:15) = 1 e 9 as Vaswani et al. (2017).",
"We set the dropout rate to 0.2 for small-scale En-Ro task and 0.1 for En-De and Zh-En tasks.",
"All experiments are conducted on 4 GeForce RTX 3090 GPUs with a distributional batch-size of 4096 tokens each GPU and an overall accumulated batch-size of 4096 8 tokens.",
"During inference, we use beam size of 4 and length penalty of 0.6 for all tasks.",
"For En-Ro and En-De translation tasks, we share the vocabulary for source and target and apply three-way weight tying(TWWT) (Press and Wolf, 2017) for training, the vocabulary sizes of both tasks are limited to 32768 tokens.",
"We train models for 50 epochs for both tasks.",
"For Zh-En translation task, the Chinese and English vocabulary sizes are 44K and 33K respectively, and models are trained for 300K steps.",
"We reproduce four representative perturbation regularization methods and recently proposed R-Drop for comparison.",
"Word-Drop We implement word-dropout (Gal and Ghahramani, 2016) by randomly replace word embeddings with zero vectors with = 0 .",
"05 in",
"Eq.(3).",
"SSE-SE The SSE-SE is a word-replacement method that randomly replaces input tokens with other tokens in vocabulary.",
"As in Wu et al. (2019), we set = 0 .",
"01 in",
"Eq.(3) and sample perturbation sequence with uniform distribution.",
"Scheduled Sampling Scheduled sampling (Ben-gio et al., 2015) is a word-replacement method that randomly replace target-side input tokens with model predictions.",
"Each model prediction token is sampled using model's output distribution.",
"The replacement rate follows a curriculum learning strategy: i = k k + exp ( i/k ) , (12) where i represents training steps, and k is a hyper-parameter depending on the speed of convergence.",
"Our implementation of scheduled sampling for transformer is parallel as in Mihaylova and Martins (2019) and Duckworth et al. (2019).",
"We set k = (4590 , 29350 , 36150) for En-Ro, En-De and Zh-En tasks respectively.",
"The hyper-parameter k is set to make sure that i is decayed to 0.9 at the end of training.",
"AdvT For adversarial training, we set = 1 in",
"Eq.(4) and = 1 in",
"Eq.(7) as Sato et al. (2019).",
"R-Drop R-Drop (Liang et al., 2021) is a very recent work whose implementation is similar to PD-R.",
"However, its motivation is to restrict the freedom of parameters by reducing sub-model divergence, while ours is to avoid token-level sample fitting problems reflected by prediction difference.",
"Since predictions are carried out by sub-models during training, the fitting bias of sub-models is also included in the prediction difference.",
"From this point, R-Drop can be viewed as a sub-component of PD-R.",
"SacreBLEU (Post, 2018) of compared methods and PD-R on three translation tasks are illustrated in table 2 and table 3.",
"We apply PD-R on encoder-side word-dropout, decoder-side word-dropout, and both-side word-dropout.",
"For all compared methods involving input perturbation, perturbation is applied on both sides of the model except scheduled sampling.",
"Note that selective training of word-dropout regularization (only S n , referred as 'ST') is also presented for comparison with Word-Drop and PD-R.",
"Experiments show that existing perturbation regularization methods are similarly effective compared to each other, which is consistent with Takase and Kiyono (2021).",
"R-Drop and selective train-ing(ST) of word-dropout regularization are consistently better than existing perturbation regularization.",
"Our PD-R against word-dropout significantly improves over word-dropout and other perturbation regularization methods on all three tasks, and also performs better than R-Drop on small-scale and medium-scale tasks.",
"On WMT16 En-De, PDR achieves 1.80 SacreBLEU improvement over vanilla transformer, 1.12 SacreBLEU improvement over existing word-dropout perturbation regularization, and 0.73 SacreBLEU improvement over R-Drop.",
"On large scale WMT17 Zh-En task though, the improvement of perturbation regularization gets smaller compared to small and medium tasks, and 7670",
"R-Drop performs better than PD-R.",
"We attribute it to the fact that large-scale tasks are sufficient in data, regularization in data level has become a burden rather than help while regularizing sub-model bias is still beneficial.",
"Longer sentences contain more complex word combinations that are unseen or seldom seen in the training set and suffer more from exposure bias (Ranzato et al., 2016; Zhang et al., 2019).",
"Performance on long sentences reflects the model's robustness to unexpected inputs.",
"In experiment, we evaluate the performance of different models on WMT16 En-De task.",
"We combine 5 test sets from WMT16 to WMT20 and divide samples into 7 subsets according to sentence length.",
"As shown in figure 2a, PD-R achieves better results in all subsets, and the improvement tends to become larger as the sentence length grows, which implies that PD-R can better handle unexpected inputs of long sentences.",
"To better evaluate model's robustness to perturbations, we conduct perturbation attack for all models, similar as Michel and Neubig (2018) and Moradi and Samwald (2021).",
"In experiment, we apply word-dropout on source sentences and generate target sentences based on perturbed source sentences.",
"Experiment results in figure 2b show that PD-R against word-dropout and existing word-dropout regularization are consistently better than the base model, and the gap becomes larger as the proportion of perturbation grows, which confirms that our approach does improve the model's robustness to perturbation.",
"Note that our experiments on other types of perturbation attack conclude that a model is robust to a certain type of perturbation only if the model is trained on this kind of perturbation, so comparison of different perturbation regularization methods under one certain type of perturbation attack is not the focus of our discussion in this subsection.",
"Ablation study in table 4 shows that training only the positively influenced subset S p using PD-R is also effective, even more effective than training only S n .",
"This indicates that PD-R can properly handle both under-fitting and over-fitting.",
"As mentioned in section 3, sub-model bias is also a source of improper fitting problems.",
"tinguish the contribution of parameter dropout and word-dropout, we conduct experiments where the difference of two passes is restricted to only parameter dropout or only word-dropout.",
"We also conduct experiments on the encoder side and decoder side separately.",
"Experiment results show that parameter dropout is an important source of improvement, word-dropout is nearly as important as parameter dropout for PD-R, while using both of them gets the best results.",
"As for the difference between the encoder side and decoder side, the decoder-side word-dropout contributes more on the En-Ro task, while on the En-De task the contribution of the encoder side is much bigger, this is also true when the two passes have no parameter dropout difference.",
"The encoder side gets more important on larger data set, which is consistent with the main results.",
"Works involving Input Perturbation Apart from the works mentioned above, some works introduce subword uncertainty at the subword segmentation stage, including sampling multiple subword candidates (Kudo, 2018), applying subword dropout (Park et al., 2020) or producing adversarial subword segmentation (Provilkov et al., 2020).",
"For character-level tasks, there are also works using character-level perturbation including character-level random deletion, insertion, substitution and swap (Belinkov and Bisk, 2018; Karpukhin et al., 2019) and adversarial substitution (Ebrahimi et al., 2018).",
"The mixup technique for NLP tasks can also be seen as a form of perturbation where samples are perturbed (mixed) with other samples for data augmentation or generation diversity (Guo et al., 2020; Li et al., 2021; Fang et al., 2022).",
"Our work can be regarded as one example of perturbation regularization.",
"However, unlike previous perturbation regularization works which are focused on finding better perturbation, our work improves the training mechanism and can be applied to any type of perturbations.",
"Influence of Perturbation Perturbation is commonly considered as a negative factor for neural models by previous works (Szegedy et al., 2014; Liang et al., 2018; Belinkov and Bisk, 2018), which is generally correct with the fact that perturbation does degrade the training and inference accuracy of a model.",
"Belinkov and Bisk (2018) demonstrates that the performance of NMT systems degrades monotonously as input modification increases, which is consistent with our observations.",
"Based on the above facts, perturbation regularization is frequently studied to enhance models' robustness to unexpected inputs at the inference stage.",
"From a data selection perspective, Khayrallah and Koehn (2018) and Briakou and Carpuat (2021) demonstrate that noisy or semantically divergent data is harmful to the training of NMT models.",
"In this paper, we find that the interaction between perturbation and model is complicated and positive influence of perturbation is very common, which is further regarded by us as a sign of relatively under-fitting and a variable that needs to be restricted.",
"Prediction Difference Prediction difference is usually considered as a reflection of the relationship between input and output and is often used to analyze model behavior.",
"Zintgraf et al. (2017) utilizes prediction difference to visualize the importance of a specific input image area to model decision.",
"Li et al. (2019b) uses the prediction difference of a target word when a source word is removed to induce word alignment and find it more accurate than attention weights.",
"Guo et al. (2019) finds that adversarial examples can be accurately and efficiently detected via prediction difference.",
"Liang et al. (2021) proposes R-Drop and take prediction difference as a regularization term to regularize sub-model divergence.",
"In this work, prediction difference is used as an analytical tool to detect improper fitting problems and also a regularization term to regularize the model's fitting bias to 7672 token-level samples.",
"In this paper, we propose to use probability difference for ground-truth tokens before and after input perturbation as an indicator to analyze the influence of different types of perturbations and attribute probability difference to improper fitting of token-level samples.",
"We find that under-fitting is almost as common as over-fitting, which is totally ignored and further aggravated by existing perturbation regularization methods.",
"To regularize both under-fitting and over-fitting, we use prediction difference as a regularization term (PD-R) and apply it on word-dropout regularization.",
"Our method achieves significant improvement over existing methods on three WMT translation tasks and is proved more robust to input perturbation.",
"We thank all the anonymous reviewers for their insightful and valuable comments.",
"This work was supported by National Key R&D Program of China (NO. 2017YFE0192900)."
] | [
"abstain",
"method",
"result",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"result",
"method",
"result",
"other",
"other"
] |
[
"Program understanding is a fundamental task in program language processing.",
"Despite the success, existing works fail to take human behaviors as reference in understanding programs.",
"In this paper, we consider human behaviors and propose the PGNN-EK model that consists of two main components.",
"On the one hand, inspired by the divide-and-conquer reading behaviors of humans, we present a partitioning-based graph neural network model PGNN on the upgraded AST of codes.",
"On the other hand, to characterize human behaviors of resorting to other resources to help code comprehension, we transform raw codes with external knowledge and apply pre-training techniques for information extraction.",
"Finally, we combine the two embeddings generated from the two components to output code embeddings.",
"We conduct extensive experiments to show the supe-rior performance of PGNN-EK on the code summarization and code clone detection tasks.",
"In particular, to show the generalization ability of our model, we release a new dataset that is more challenging for code clone detection and could advance the development of the community.",
"Our codes and data are publicly available at https://github.com/ RecklessRonan/PGNN-EK .",
"The past decades have witnessed the prosperity of programming platforms, such as Github and Stack Overflow .",
"These platforms generate massive open-source code 1 data that is named as Big Code in (Allamanis et al., 2018a).",
"To automate the software development and maintenance, based on the Software Naturalness hypothesis (Hindle et al., 2016), natural language processing (NLP) techniques have been applied in program understanding.",
"After that, a series of downstream programming language processing (PLP) tasks can be Corresponding Author 1 We interchangeably use code and program in this paper.",
"performed, including code summarization (Zhang et al., 2020; Ahmad et al., 2020; Liu et al., 2021) and code clone detection (Zhang et al., 2019; Wang et al., 2020).",
"Existing works for understanding programs mainly utilize three types of information: code context , code structure and external knowledge .",
"Specifically, code context refers to the token sequence in the code.",
"For code structure, each code can be parsed into various types of intermediate representations, such as AST (Abstract Syntax Tree), CFG (Control Flow Graph) and PDG (Program Dependence Graph).",
"These representations capture the structural information of codes.",
"Further, there also exists external knowledge associated with codes, such as API documentation and other exemplary codes.",
"Despite the success, all these models ignore considering human behaviors in reading programs.",
"Recently, (Bengio et al., 2021) suggest the potential futures of deep learning by comparing current AI methods with human learning abilities.",
"This further prompts us to revisit program understanding: Can we develop a model that understands programs like humans?",
"In the domain of programming education, how people understand codes is a topic that has been studied.",
"For example, based on knowledge base including syntactical knowledge (e.g., programming basics) and semantic knowledge (e.g., API docu-mentation), (Schulte et al., 2010) offer a bottom-up reading technique, which assumes that people be-gin with individual code lines and chunks, and then combine them into higher-level abstractions.",
"Further, (Park et al., 2016) state that when people read codes, reasoning about the hierarchical relationship of blocks, statements, expressions and variables is necessary.",
"Based on these studies, we conclude three key points for human understanding codes.",
"First, the transition of defined variables has to be traced.",
"Second, humans usually adopt a divide-and-conquer strategy, which divides codes based 5142 on statements and then understands codes from a local-to-global view.",
"Third, humans resort to external knowledge to comprehend codes, such as API documentation and code examples written by experts.",
"In this paper, inspired by human behaviors for code comprehension, we propose a novel P artitioning-based G raph N eural N etwork with E xternal K nowledge (PGNN-EK).",
"To capture code context and structure, PGNN-EK upgrades the traditional AST and defines a novel subtoken-based AST called S-AST.",
"In S-AST, we add edges between variables to trace the variable transitions, edges between adjacent tree leaves from left to right to enrich the context and structure information, and edges between sub-nodes corresponding to subtokens tokenized from user-defined identifiers to handle the Out of Vocabulary (OOV) problem (Karampatsis et al., 2020).",
"Details will be illustrated later.",
"After that, we first apply graph neural network (GNN) models on the S-AST to derive a code embedding.",
"To further implement the divide-and-conquer reading strategy, we partition the S-AST into multiple subgraphs, which follow the sequence of statements in the original code.",
"For each subgraph, we use GNN models to generate the subgraph embedding.",
"Then, these subgraph embeddings are fused to generate another code embedding.",
"For these two code embeddings, since they are both derived from S-AST, we further aggregate them.",
"On the other hand, to characterize the dependence on external knowledge for code comprehension, we traverse the AST of the original code to derive a sequence of tokens for syntactic knowledge and then add the API descriptions to the end for semantic knowledge.",
"We then apply CodeBERT (Feng et al., 2020) on the token sequence to capture external knowledge.",
"Finally, PGNN-EK generates the output code embedding by combining the embedding derived from S-AST and the one from external knowledge.",
"To evaluate the model performance, we conduct experiments on the code summarization task and code clone detection task, respectively.",
"Before we apply PGNN-EK on the code clone detection benchmarks in CodeXGLUE (Shi et al., 2021) extracted from the BigCloneBench 2014 dataset (Sva-jlenko et al., 2014), we notice from the leaderboard 2 that the results are incredibly high, where 2 https://microsoft.github.io/ CodeXGLUE/ the minimum F1 score is 0 .",
"949 .",
"Then we dive into the characteristics of the dataset and find that the functionalities of codes in the test set have all appeared in the training set.",
"Therefore, the dataset is very simple.",
"To further test the model's generalization ability, we construct a new dataset, where the test set contains codes whose functionality has never appeared in the training set.",
"This new dataset provides an insightful reference for further research in the community.",
"representation S-AST that can be used to handle the OOV problem in PLP.",
"We follow human behaviors in understanding codes and propose a novel model PGNN-EK that leverages code context, structure and external knowledge.",
"Specifically, we put for-ward a novel partitioning-based graph neural network model that can effectively use code context and structure.",
"We also present a code transformation method to utilize external knowledge in boosting comprehension.",
"We conduct extensive experiments on code summarization and code clone detection tasks to demonstrate the effectiveness of our model.",
"In particular, we identify the limitation of a benchmark dataset for code clone detection and release a new dataset that is more challenging.",
"Program understanding is a topic that has received wide attention.",
"Early works use either code context or structure information.",
"For example, taking codes as raw texts, some works use language models (Raychev et al., 2014; Allamanis et al., 2015), RNN-series (Zaremba and Sutskever, 2014; Dam et al., 2016) and attention (Iyer et al., 2016) to represent codes.",
"However, different from natural language, programs are more structural, which can be parsed into intermediate graphs, such as AST.",
"Many works for code analysis are then proposed based on AST, such as AST-based LSTM (Wei and Li, 2017), AST-based CNN (Yu et al., 2019), ASTNN (Zhang et al., 2019), code2vec (Alon et al., 5143 MethodDeclaration Modifier Basic-Type get public int Formal-Parameter Formal-Parameter Basic-Type a Basic-Type Statement-Expression assignment Member-Reference Method-Invocation Math Member-Reference abs int int b a a L arger AST edges Leaf edges Subtoken edges Data flow edges Non-leaves Leaves Subtoken nodes API nodes Statement-Expression assignment Member-Reference Method-Invocation Math Member-Reference abs b b public int getLarger(int a, int",
"2019b), and code2seq (Alon et al., 2019a).",
"Recently, GNN models have also been applied in code understanding.",
"Since the original AST is actually a tree that is sparse, these works (Allamanis et al., 2018b; Wang et al., 2020; Wang and Li, 2021) first add edges to AST to make it more connected and then apply GNN models.",
"Further, there are also works (Yu et al., 2020; Cummins et al., 2021; Liu et al., 2021) that utilize other intermediate graphs such as CFG, PDG and CPG (Yamaguchi et al., 2014).",
"Recently, approaches that use both code context and structure are proposed.",
"For example, Hellendoorn et al. (2020) and Zgner et al. (2021) incorporate the structure information derived from AST, such as edge weights and node distances, into the context attention computation in Transformer (Vaswani et al., 2017).",
"Despite the success, all these methods only consider the code context and structure information.",
"There are also approaches that utilize the external knowledge associated with codes.",
"For example, some methods apply pre-training techniques in NLP to boost comprehension, such as CodeBERT (Feng et al., 2020), GPT-C (Svyatkovskiy et al., 2020) and PLBART (Ahmad et al., 2021).",
"There are also works that incorporate code characteristics into pre-training models, such as Graph-CodeBERT (Peng et al., 2021), OSCAR (Peng et al., 2021) and InferCode (Bui et al., 2021).",
"Further, API is another external source for program understanding, which has been introduced in many works (Hu et al., 2018; Xu et al., 2020).",
"However, all these methods ignore considering human behaviors in program understanding.",
"In this paper, we focus on two program understanding downstream tasks: code summarization and code clone detection.",
"For code summarization, some works (Iyer et al., 2016; Ahmad et al., 2020) use code context only, some methods (LeClair et al., 2019; Alon et al., 2019a) use code structure only, while there are also models (Hellendoorn et al., 2020; Zgner et al., 2021) that use both information.",
"Further, Liu et al. (2021) introduce external knowledge for performance improvement.",
"For code clone detection, existing works mainly employ code structure (Wei and Li, 2017; Zhang et al., 2019; Wang et al., 2020) and pre-training models (Feng et al., 2020; Ahmad et al., 2021).",
"In this section, we construct S-AST.",
"The original AST has two main limitations: Low connectivity .",
"The original AST is actually tree-structured, where every two nodes are minimally connected with only one path.",
"This could lead to a long distance between leaf nodes.",
"As pointed out in (Alon and Ya-hav, 2021), directly applying GNN models in tree-shaped graphs could cause the long-range problem.",
"OOV problem .",
"User-defined identifiers in codes can be arbitrarily complex and most of them are compound words, which could induce a large vocabulary size.",
"For example, the training set size in the benchmark dataset CodeXGLUE (Lu et al., 2021) for code summarization is 164 , 814 , while the vocabulary size for AST nodes is 620 , 256 .",
"After we split the nodes by camel case and underscores (Cvitkovic et al., 2019), the vocabulary size is still as high as 201 , 286 .",
"A very large vocabulary could cause the OOV problem (Jean et al., 2015) and thus adversely affect the model performance.",
"To improve the connectivity of the AST, there exist some works (Allamanis et al., 2018b; Wang et al., 2020; Wang and Li, 2021) that add edges to the AST.",
"However, these methods cannot address the OOV problem.",
"Therefore, we propose a new code intermediate graph S-AST, as shown in Figure 1. Similar as in (Allamanis et al., 2018b; Wang et al., 2020), we add data flow edges to trace variable transitions and connect adjacent leaf nodes to encourage learning from contexts.",
"To solve the OOV problem, we further reduce the vocabulary size by using the tokenizer of RoBERTa (Liu et al., 2019) to tokenize every leaf node in the AST.",
"When a leaf node can be tokenized into multiple subtokens, we keep the first subtoken as the parent node and take other subtokens as its children.",
"For example, the token getLarger is divided into the parent node get and the children nodes L and arger.",
"These new parent-children connections are defined as subtoken edges.",
"With these three types of edges added, we increase the number of edges in the AST and improve the graph connectivity.",
"Further, the vocabulary size could be significantly reduced.",
"In our experiments, we use javalang 3 to generate Java AST and reduce the vocabulary size to 50 , 336 , where 50 , 265 is the size of original RoBERTa vocabulary and 71 is the number of keywords in non-leaf nodes defined by javalang.",
"In this section, we introduce the PGNN-EK model, which is composed of two main components.",
"On the one hand, the partitioning-based graph neural network model (PGNN) is proposed to follow the divide-and-conquer behaviours of humans to 3 https://github.com/c2nes/javalang understand programs.",
"On the other hand, PGNN-EK leverages external knowledge to enhance the model's capability.",
"The overall architecture of PGNN-EK is summarized in Figure 2. public int getLarger(int a, int",
"As illustrated in (Schulte et al., 2010) and (Park et al., 2016), the bottom-up reasoning on the hierarchical relationship of statements plays an essential role in human understanding.",
"Therefore, we propose a statement-based partitioning algorithm to divide S-AST into multiple subgraphs.",
"Since S-AST is no longer a tree, for convenience, we first keep subtokens and their edges in-between in S-AST, and remove edges linking variables and those connecting adjacent leaf nodes, to derive a tree structure.",
"After that, we calculate the number of nodes in each subtree of the root node and each subtree corresponds to a statement of the raw code.",
"Then, we accumulate the number of nodes in subtrees from left to right.",
"When the sum exceeds the pre-defined threshold , we group these subtrees into one subgraph and reset the sum to zero.",
"If the current subgraph is not the first one, for each variable node in it, we also add to the subgraph the closest node indicating the same variable in previous subgraphs to trace the variable transition.",
"After the subgraph is derived, we add edges between nodes that represent the same variable and also connect adjacent leaf nodes as in the original S-AST.",
"We repeat this process until all subtrees are visited.",
"Note that if the node number of the last subgraph is smaller than / 2 , we merge the last subgraph into the penultimate subgraph.",
"Finally, we summarize the pseudocodes of the partitioning algorithm in Alg.",
"1. After subgraphs are derived, as in (Hellendoorn et al., 2020), we adopt GGNN (Li et al., 2016) as the graph embedding model, which uses a multi-5145 layer perceptron (MLP) and a gated recurrent unit (GRU) to perform message passing and embedding updating.",
"Specifically, at the ( l + 1) -th layer, to update the embedding h l +1 i of node x i , we have: m l +1 i = (cid:88) j N i MLP ( h lj , e ij ) , h l +1 i = GRU ( m l +1 i , h li ) , where N i is the neighbor set of x i and e ij is the feature vector of the edge between x i and x j .",
"After node embeddings are generated, we use a READOUT function to obtain the graph embedding G : G = READOUT ( { h i } ) .",
"We repeat the above process on each subgraph to derive a list of subgraph embeddings L = [ G 1 , G 2 , , G n ] , where n is the number of subgraphs.",
"Next, we keep the order of the subgraph list and feed L into an unidirectional LSTM: O = LSTM ( L ) .",
"Inspired by the skip connection (He et al., 2016), we also perform GGNN on the whole S-AST graph to derive a code embedding C .",
"Finally, we concatenate C and the last output O [ 1] of LSTM.",
"We further feed the result into a fully connected layer to get the output code embedding E p : E p = FC ( Concat ( C , O [ 1])) .",
"To help understand programs, people often resort to external knowledge.",
"For example, humans usually learn from massive exemplary codes written by experts for better syntactic comprehension, which are in the format of programming language.",
"Further, API documentation is written in natural language and provides semantic details on functions.",
"Therefore, a research question arises: how to fuse these external syntactic and semantic knowledge into our model?",
"To address the problem, we use pre-training techniques in programming language processing (PLP), which are trained on massive code corpus to learn programming basics.",
"In particular, we adopt CodeBERT (Feng et al., 2020), which is a bimodal pre-trained model for both programming language and natural language.",
"syntactic information contained in the raw code, we perform pre-order traversal on the AST of the code to obtain a sequence of tokens and replace the raw code.",
"This is because the AST includes extra code-related information, such as statements, variables and operations.",
"Then we append the corresponding API description to the end.",
"A toy example of transformation is shown in Figure 3. Finally, we feed the transformed context T into the pre-trained CodeBERT 4 and obtain the embedding E e : E e = CodeBERT ( T ) .",
"Finally, we concatenate the output embeddings of PGNN and CodeBERT, and feed the result into a fully connected layer to obtain the final embedding E f : E f = FC ( Concat ( E p , E e )) .",
"In this section, we evaluate the performance of PGNN-EK.",
"We conduct experiments on two program understanding tasks: code summarization and code clone detection.",
"For each task, we use two benchmark datasets, whose statistics are listed in Table 1. 5.1 Implementation details In our experiments, we use the AdamW optimizer and linear schedule from (Wolf et al., 2020) to update model parameters.",
"For fair comparison, we run all experiments on 2 Tesla V 100 with 32 G memory.",
"For PGNN, we set the number of GNN layers, the number of LSTM layers, the embedding size of GNN node, and the embedding size of LSTM hidden layer to 3 , 2 , 768 and 768 , respectively.",
"We choose the mean operator as the READOUT function.",
"To avoid overfitting, we set the dropout rate to 0 .",
"2 in PGNN.",
"We implement GNNs 4 https://huggingface.co/microsoft/ codebert-base 5146 Table 1: The statistics of datasets Task Dataset Training Validation Test Description Code summarization CodeSearchNet-Java (CSN) 164,814 5,179 10,952 Provided by CodeXGLUE TL-CodeSum (TLC) 69,708 8,714 8,714 Original Code clone detection BigCloneBench (BCB) 901,028 415,416 415,416 Provided by CodeXGLUE BigCloneBench-Function (BCB-F) 398,110 78,602 81,202 Split by functionality based on PyTorch Geometric (Fey and Lenssen, 2019).",
"In the EK-enhanced component, we obtain 51 , 191 method-description pairs after preprocessing the API documentation 5 .",
"For pair examples, see Appendix B. In the code summarization task, we add a 6 -layer Transformer-based decoder to generate summarization as in CodeBERT.",
"We set learning rate to 0 .",
"00005 , batch size to 16 , training steps to 50 , 000 , maximum code length to 256 and maximum summarization length to 32 , respectively.",
"In the code clone detection task, as suggested by (Neculoiu et al., 2016), we double the PGNN-EK to a siamese neural network to calculate code similarity.",
"We set learning rate to 0 .",
"00005 , batch size to 4 , training steps to 200 , 000 and maximum code length to 400 , respectively.",
"Code summarization aims at generating natural language comments for codes.",
"We evaluate the performance of PGNN-EK on two benchmark datasets, which are TL-CodeSum (shorted as TLC) (Hu et al., 2018) and the Java subset of CodeSearchNet (shorted as CSN) (Husain et al., 2019).",
"For TLC, we use the original dataset.",
"For CSN, we use the version provided by CodeXGLUE (Lu et al., 2021).",
"For fair comparison, we use the smoothed BLEU-4 score (Lin and Och, 2004) as in CodeXGLUE.",
"The larger the score, the better the model performance.",
"We compare our model with five representative baselines, including CodeNN (Iyer et al., 2016), NCS (Ahmad et al., 2020), Rencos (Zhang et al., 2020), CodeBERT (Feng et al., 2020) and PLBART (Ahmad et al., 2021).",
"Due to the space limitation, we move the details of these baselines to Appendix C. Table 2 shows the code summarization results.",
"Note that the results of CodeNN, NCS and Rencos are directly taken from (Shi et al., 2021).",
"Also, the results of CodeBERT and PLBART on CSN are 5 https://www.oracle.com/java/ technologies/javase-jdk8-doc-downloads.html derived from the leaderboard of CodeXGLUE.",
"For their results on TLC, we run the codes released by the authors of the paper and set hyper-parameters according to the original paper.",
"From the table, we see that, due to the fusion of external knowledge, pre-training models CodeBERT, PLBART and PGNN-EK outperform other models on both datasets.",
"Further, PGNN-EK performs the best.",
"The gaps between PGNN-EK and the runner-up model PLBART on CSN and TLC are 0 .",
"5 and 1 .",
"05 , respectively.",
"This shows the importance of considering human behaviors for code comprehension.",
"We also observe that scores on TLC are substantially larger than that on CSN.",
"This is because codes in the training set and the test set of TLC are considerably more similar in functionalities, which will be elaborated in the next section.",
"The goal of code clone detection is to detect whether two code fragments implement the same functionality.",
"Following (Zhang et al., 2019; Wang et al., 2020), we use the BigCloneBench 2014 dataset (Svajlenko et al., 2014) and adopt the version provided by CodeXGLUE.",
"We short it as BCB.",
"Before we apply PGNN-EK on BCB, we notice from the leaderboard of CodeXGLUE that the results on BCB are incredibly high, where the mini-5147 mum F1 score is 0 .",
"949 .",
"Then we dive into the characteristics of the dataset and compare BCB with the original benchmark (Svajlenko et al., 2014).",
"We find that the functionalities of codes in the test set have all appeared in the training set of BCB.",
"Therefore, BCB is a very simple dataset.",
"To test the model's generalization ability, we construct a new dataset, named BCB-F, where the test set contains codes whose functionality has never appeared in the training set.",
"We first extract codes from the new version benckmark (Svajlenko and Roy, 2015) that has more code fragments and code functionalities.",
"We next split training/validation/test set based on code functionalities.",
"Specifically, we construct training/validation/test set with 22 / 11 / 10 code functionalities.",
"For details on the functionality splits of BCB and BCB-F, see Appendix D. We keep the same number of positive and negative samples in all the three sets.",
"The comparison between BCB and BCB-F is given in Table 3. Table 3: Comparisons between BCB and BCB-FBCB BCB-F Code fragments 9134 73182 Functionalities 10 43 Training/Test splitting random sample by functionality Ratio of positive-negative nearly 2:1 1:1 In addition to the pre-training models CodeBERT and PLBART, we further compare our model with two representative methods in code clone detection, which are ASTNN (Zhang et al., 2019) and FA-AST (Wang et al., 2020) (For the details of these baselines, see Appendix C).",
"Table 4 shows the evaluation results on the two datasets.",
"For BCB, we take the results of other baseline methods from CodeXGLUE 6 .",
"For BCB-F, we run the source codes released by their authors to obtain the results.",
"From the table, we observe: 1) All models perform very well on BCB, indicating that the dataset is very simple.",
"However, the best F1 score on BCB-F is only 0 .",
"724 , which shows that this dataset is very challenging.",
"2) The non-pre-training models ASTNN and FA-AST predict all samples to be positive and perform poorly on BCB-F, while pre-training models perform better.",
"This 6 Specifically, we take the results of ASTNN and FA-AST from https://github.com/ microsoft/CodeXGLUE/tree/main/Code-Code/Clone-detection-BigCloneBench and that of CodeBERT and PLBART from the CodeXGLUE leaderboard.",
"Note that PLBART only reports the F1 score on BCB.",
"further demonstrates the importance of introducing external knowledge.",
"3) PGNN-EK achieves the best results on both datasets.",
"This shows that considering human behaviors in program understanding enhances the generalization ability of PGNN-EK.",
"We further conduct ablation study to verify the importance of its main components in PGNN-EK, including subtokens, the S-AST graph, the partitioning-based GNN and the external knowledge.",
"Specifically, one variant employs only the S-AST graph without using external knowledge.",
"This helps us realize the importance of external knowledge in program understanding.",
"We call this variant PGNN only .",
"Meanwhile, we define another variant that ignores the hierarchical relationships in code structure and uses only external knowledge.",
"We call this variant EK only .",
"To further show the significance of S-AST in code understanding, we replace S-AST with the original AST in the variant PGNN-EK with AST .",
"We also implement a variant that does not use the subtoken tokenizer to generate extra subtoken nodes and edges.",
"We call it PGNN-EK without subtoken .",
"This variant can be used to show the importance of subtokens in addressing the OOV problem.",
"To show the advantage of the partitioning strategy, we propose a variant GNN-EK that discards the partitioning step.",
"Finally, we consider a variant that feeds the raw code into the pre-trained CodeBERT without transforming it with external knowledge.",
"We call this variant PGNN-CodeBERT .",
"Table 5 summarizes the ablation study results.",
"From the table, we see that: 1) S-AST contains richer information than AST and can serve as an effective code intermediate representation in program understanding.",
"The introduction of subtokens nodes and edges alleviates the OOV problem 5148 Table 5: Ablation study on PGNN-EK.",
"and enhances the model performance.",
"2) External knowledge helps boost understanding codes.",
"In particular, code transformation with external knowledge improves the expressiveness of the raw code.",
"3) The full model PGNN-EK outperforms other variants on all the datasets and tasks.",
"This indicates the importance of every main component in PGNN-EK.",
"It further shows that leveraging code context, code structure and external knowledge as humans is helpful for program understanding.",
"We end this section with a hyper-parameter sensitivity analysis.",
"In PGNN-EK there is a key hyper-parameter that is used to control the size of subgraphs.",
"Here, we investigate the sensitivity of .",
"We vary the value of from { 10 , 30 , 50 , 70 , 90 , 110 , 130 , 150 , 170 , 190 } , and the final prediction results of PGNN-EK on 4 datasets are shown in the Figure 4. Table 6: The average number of nodes in S-AST Datasets CSN TLC BCB BCB-FS-AST size 137 140 372 348 The results indicate that 1) the model performance first increases and then drops, with the increase of the subgraph size.",
"When the subgraph size is too small, each subgraph is a code fragment that no longer represents a code statement and thus contains less information.",
"Further, when the subgraph is too large, each subgraph could be composed of statements that are of different semantic meanings, which thus degrades the model performance.",
"2) PGNN-EK performs the best at = 30 on CSN and TLC while it achieves the best results at = 70 on BCB and BCB-F.",
"We further investigate the reason and show the average 0 2 0 4 0 6 0 8 0 1 0 0 1 2 0 1 4 0 1 6 0 1 8 0 2 0 0 1 5 .0 1 5 .5 1 6 .0 1 6 .5 1 7 .0 1 7 .5 1 8 .0 1 8 .5 1 9 .0 1 9 .5 2 0 .0 C S NS m o o t h e d B L E U -4 (cid:1) 0 2 0 4 0 6 0 8 0 1 0 0 1 2 0 1 4 0 1 6 0 1 8 0 2 0 0 4 7 .0 4 7 .5 4 8 .0 4 8 .5 4 9 .0 4 9 .5 5 0 .0 5 0 .5 5 1 .0 5 1 .5 5 2 .0 T L CS m o o t h e d B L E U -4 (cid:1) 0 2 0 4 0 6 0 8 0 1 0 0 1 2 0 1 4 0 1 6 0 1 8 0 2 0 0 0 .9 0 0 .9 1 0 .9 2 0 .9 3 0 .9 4 0 .9 5 0 .9 6 0 .9 7 0 .9 8 0 .9 9 1 .0 0 B C BF 1 (cid:1) 0 2 0 4 0 6 0 8 0 1 0 0 1 2 0 1 4 0 1 6 0 1 8 0 2 0 0 0 .6 0 0 .6 2 0 .6 4 0 .6 6 0 .6 8 0 .7 0 0 .7 2 0 .7 4 0 .7 6 0 .7 8 0 .8 0 B C B FF 1 (cid:1) Figure 4: The influence of subgraph size on 4 datasets.",
"number of nodes in S-AST on the four datasets in Table 6.",
"From the table, BCB and BCB-F contain 2 .",
"5 times more nodes than that in CSN and TLC.",
"This empirically suggests that setting to be about 15 to 14 of the average node number in S-AST could be a reasonable choice.",
"In this paper, we followed human understandings for programs and proposed the PGNN-EK model.",
"To enrich the code structure information and alleviate the OOV problem, we presented the S-AST graph based on AST, which uses a subtoken tokenizer to generate subtoken nodes and edges between them.",
"Inspired by the divide-and-conquer strategy, we proposed the partitioning-based graph neural network model on S-AST that employs code context and structure.",
"To leverage the external knowledge to boost comprehension, we transformed the raw code to fuse syntactic and semantic knowledge and utilized pre-training techniques for information extraction.",
"We performed extensive experiments to show the effectiveness of our model PGNN-EK on the code summarization and code 5149 clone detection tasks.",
"In particular, to show the generalization ability of the model, we released a new benchmark that is more challenging.",
"This work has been supported by the National Natural Science Foundation of China under Grant No.",
"U1911203, Alibaba Group through the Alibaba Innovation Research Program, the National Natural Science Foundation of China under Grant No. 61877018 and No.61977025, and Shanghai Pujiang Talent Program under Grant No. 21PJ1402900."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"result",
"objective",
"other",
"other"
] |
[
"Neural language models are usually trained to match the distributional properties of large-scale corpora by minimizing the log loss.",
"While straightforward to optimize, this approach forces the model to reproduce all variations in the dataset, including noisy and invalid references (e.g., misannotations and hallucinated facts).",
"Even a small fraction of noisy data can degrade the performance of log loss.",
"As an alternative, prior work has shown that minimizing the distinguishability of generated samples is a principled and robust loss that can handle invalid references.",
"However, distinguishability has not been used in practice due to challenges in optimization and estimation.",
"We propose loss truncation: a simple and scalable procedure which adaptively removes high log loss examples as a way to optimize for distinguishability.",
"Empirically, we demonstrate that loss truncation outperforms existing baselines on distinguishability on a summarization task.",
"Furthermore, we show that samples generated by the loss truncation model have factual accuracy ratings that exceed those of baselines and match human references.",
"Learning to generate text is a core part of many NLP tasks, including summarization (Nallapati et al., 2016), image captioning (Lin et al., 2014), and story generation (Roemmele, 2016).",
"A common challenge to all these tasks is that references from the training distribution are not unique and contain substantial variations in phrasing and content (Wiseman et al., 2017; Dhingra et al., 2019).",
"Learning to generate under a set of diverse and noisy references is challenging as some variations ought to be learned (e.g., paraphrasing) while others should not (e.g., hallucinated facts, ignoring prompts).",
"match the underlying distribution, leading to models that replicate and sometimes even amplify unwanted behaviors such as hallucination during generation.",
"For example, neural language models often produce fluent text that is unfaithful to the source (Tian et al., 2019; Wiseman et al., 2017; Lee et al., 2018).",
"Existing work (Fan et al., 2018; Holtzman et al., 2019) has primarily addressed these issues by constructing decoders that implicitly remove unwanted variation when generating (see 6 for a detailed discussion of task-specific losses).",
"In this work, we argue that this phenomenon is not model specific, but is due to the widely-used log loss: we demonstrate that log loss is not robust to noisy and invalid references (2).",
"In particular, log loss requires that models assign probabilities to all potential test reference sequences.",
"As a result, log loss is sensitive to outliers: invalid or noisy references with small probability mass can cause large changes in model behavior.",
"We show that the brittleness of log loss, together with the noise in existing generation datasets, lead to low-quality and unfaithful generated text.",
"Instead of optimizing log loss, which has little correlation with model output quality (Theis et al., 2016; Hashimoto et al., 2019; Gamon et al., 2005), recent work on diverse generation models has proposed optimizing for the distinguishability of samples from the model and the reference.",
"Distinguishability provides a natural and appealing guarantee: samples that are indistinguishable from human generated text will be as high quality as human generated text.",
"Furthermore, we show that optimizing for distinguishability is robust in the face of noisy and even invalid data.",
"Despite its appeal, distinguishability has not been widely used due to statistical and computational challenges.",
"For example, existing methods that directly optimize for distinguishability have yet to match even naive log loss based baselines (Caccia et al., 2018).",
"We propose a modification to the log loss, loss truncation , that has the benefits of distinguishability while being efficient to train.",
"Loss truncation is as efficient to train as log loss, nearly as robust as distinguishability, and provides distinguishability guarantees via an upper bound.",
"It achieves these properties by modifying the standard log loss to adaptively remove examples with high log loss.",
"We additionally extend loss truncation with a sequence-level rejection sampling scheme that generates higher quality sequences by restricting the outputs to be high probability sequences.",
"We show that loss truncation with direct and rejection sampling outperforms standard log loss based generation methods (beam search, full sampling, topk , and topp sampling) on distinguishability, as measured by the HUSE score (Hashimoto et al., 2019).",
"We additionally study the factual accuracy of a summarization system trained on loss truncation and show that our proposed approach produces summaries which improve upon all baselines (including beam searched models) and match references on factual accuracy.",
"Task and Background.",
"We consider a natural language generation task with a conditional language model , where we are given a context x drawn from p ( x ) and our probabilistic model p ( y | x ) produces an output y by approximating a (usually human) reference distribution p ref ( y | x ) .",
"In order to achieve this, many existing models are trained to minimize the Kullback-Leibler (KL) divergence, KL ( p ref || p ) = E p ref [log p ] (cid:124) (cid:123)(cid:122) (cid:125) log loss + E p ref [log p ref ] (cid:124) (cid:123)(cid:122) (cid:125) negentropy .",
"We refer to the first term of this divergence as the log loss of a model.",
"The second term is commonly ignored as it is a constant with respect to the model.",
"Minimizing the log loss has several practical benefits: 1) it is written as an expected loss (and is thus straightforward to optimize via stochastic gradient descent), 2) it factorizes across tokens in autoregressive modeling, and 3) it provides a guarantee on a model's goodness of fit (Eq (1)).",
"Unfortunately, log loss also suffers from several drawbacks.",
"It is known to have little correlation with a model's sample quality and it can be brittle to invalid references in the training data.",
"Log loss is not robust to noise.",
"The KL divergence has intuitively correct behavior when each input x has a single correct reference y : it will maximize the probability of the single correct reference.",
"However, log loss can be problematic when there are multiple correct references, of which some are invalid or difficult to model.",
"In particular, log loss is sensitive to invalid or noisy data because it requires that the model assign high probabilities to all potential references.",
"Log loss is unbounded above: a model assigning zero probability to even a single reference makes the model incur an infinite overall loss.",
"We show a well-known example of this behavior with synthetic data.",
"We consider fitting a single Gaussian to a mixture of two Gaussian in Figure 1. The reference distribution (blue) has a valid set of references at zero as well as variation that the model does not expect (e.g., invalid or noisy references) on the right.",
"Minimizing the log loss results in a suboptimal model that is forced to span both groups.",
"Furthermore, post-hoc processing the model does not help, as even the most likely output under the log loss trained model (~3) has low probability under the reference distribution.",
"In natural language generation, training sets can contain invalid or poor quality references.",
"As such, these types of problems manifest themselves in tasks such as summarization (hallucinat-ing facts), story generation (ignoring prompts and constraints), and captioning (ignoring parts of the image).",
"Much of the existing literature on faithful generation has focused on designing better models for valid references (via copying or attention con-straints), but the example in Figure 1 shows that this alone may not be sufficient.",
"The Gaussian model' in this case perfectly fits the mixture component Context: For the first time in five years, Microsoft corp. is finally unveiling a new system for operating personal computers.",
"Title: Microsoft Makes Long-Awaited Software Upgrade Available to Businesses Thursday.",
"at zero but is still brittle because it cannot simultaneously fit the other group of (invalid) samples.",
"Resolving this will require either a model which is designed explicitly to capture invalid references or a loss function that can ignore them.",
"Case Study: Hallucination in Summarization We show that low-probability reference sequences (e.g., Figure 1) are pervasive by examining the Gigaword summarization dataset (Rush et al., 2017).",
"We manually classified 300 titles into two categories: 1) requires hallucinating new facts and 2) directly entailed from the context.",
"We show an example of a reference that requires hallucination in Figure 2. In this example, a model that assigns high probability to the new fact (Thursday) must also frequently hallucinate dates on other examples.",
"We show the fraction of examples in each category in Table 1. As shown, 35% of titles require hallucinating new facts.",
"Others have found this phenomenon to be pervasive in other datasets (Kryscinski et al., 2019), including the CNN/DM dataset (See et al., 2017).",
"Studying the log loss of these examples 1 , we note that the average log loss of titles that require new facts is over 1.7 the average loss of the titles that are directly entailed (Table 1) and the high-loss examples are clearly dominated by examples which require hallucination (Figure 3).",
"In fact, we find that over 80% of examples with greater than 40 log loss requires some form of hallucination.",
"These statistics are similar to the toy example we presented earlier in Figure 1. A small but nontrivial fraction of invalid and unexpected data force the model to incur high losses.",
"Much like in the earlier example, we can see that a model which aims to have low log loss on this dataset must spend a substantial amount of effort learning to hallucinate.",
"will inevitably contain annotation errors and noise, we might ask whether there are effective alternatives to the KL divergence for training models.",
"The distinguishability of samples from a model compared to the reference is one such objective.",
"Distinguishability has recently gained attention as a way to learn and evaluate models based on both sample quality and diversity (Hashimoto et al., 2019; Zhou et al., 2019; Zellers et al., 2019; Gehrmann et al., 2019).",
"We show that this objective also serves as a naturally robust alternative to the KL divergence for learning language models.",
"Unfortunately, directly optimizing for distinguishability (e.g., via generative adversarial networks) is challenging (Caccia et al., 2018) and we show this works poorly in practice (5).",
"Distinguishability is defined as the error rate of an optimal classifier which seeks to distinguish samples from both the model and reference, and we will formally define this via the mixture y | x, z (cid:40) p ref ( y | x ) if z = 1 p ( y | x ) if z = 0 where z Bernoulli (cid:0) 12 (cid:1) .",
"We can now define L to be twice the optimal error in identifying samples from the model L := 2 inf f XY [0 , 1] P [ f ( x, y ) (cid:54) = z ] (2) Our measure of distinguishability, the total variation (TV) distance , is a linear function of this error | p p ref | TV = 1 L where p and p ref refer to the joint distributions p ( y | x ) p ( x ) and p ref ( y | x ) p ( x ) for brevity.",
"Note that distinguishability is inherently robust to the addition of any small fraction of noisy data (Donoho et al., 1988).",
"Unlike the log loss, the model's loss on an example for TV is upper bounded by 1 (Eq 2).",
"We show an example of TV's robustness in Figure 1, where a small amount of noise does not substantially affect the learned distribution.",
"Log loss as a surrogate for distinguishability.",
"Distinguishability is both robust and provides sample quality guarantees, but is challenging to optimize (Caccia et al., 2018).",
"One approach to optimize for distinguishability is to find an appropriate surrogate loss which serves as an upper bound.",
"This is analogous to the use of logistic or hinge losses as a way to optimize for classification accuracy.",
"For log loss, Pinsker's inequality (Csiszar and Korner, 2011) relates the KL divergence and distinguishability as | p p ref | 2 TV 1 2 KL ( p ref || p ) .",
"This explains the empirical success of log loss in low-uncertainty situations, where KL is sufficiently small and this bound becomes tight.",
"Our approach will be to modify the log loss slightly by truncating the distribution.",
"This truncated loss will be as easy to optimize as log loss, while being more robust and providing a tighter variant of Pinsker's inequality.",
"Intuition.",
"We would like the model to ignore data that would force it to unnecessarily hallucinate at test time.",
"Concretely, recall the toy example (Fig-ure 1); there is a set of invalid references that force the model to be degenerate.",
"If we could remove these these invalid references by truncating the distribution, the resulting model would be high quality.",
"We can show that this intuition is theoretically jus-tified, and that truncating (i.e., removing) an appropriate c -fraction of the data provides tighter bounds on the distinguishability of the model.",
"Improved log losses for distinguishability.",
"We will demonstrate that log loss with an appropriate c -fraction of the data removed provides guarantees on distinguishability.",
"We will define the set of truncated distributions as the set of distributions with any c -fraction of data removed P c,p := { q 0 : p = (1 c ) q 0 + cq 1 for some q 1 } .",
"A simple lemma shows that that all elements in P c,p are c -close to p in TV (Appendix B).",
"Now we state our main result, Proposition 1. For any c [0 , 1] and p t P c,p ref , | p p ref | 2 TV 1 2 KL ( p t || p ) + 2 c + c 2 See Appendix B for the proof.",
"Namely, distinguishability is bounded by the log loss with respect to the truncated distribution and a small constant.",
"Furthermore, this upper bound is valid for any c , although different c will change the tightness of the bound and produce different models.",
"This truncated bound can be substantially tighter than Pinsker's inequality.",
"Consider for example a model that can perfectly capture (1 c ) fraction of the data, but c -fraction of the reference outputs cannot be generated by the model and receive probability zero.",
"In this case, the distinguishability (TV) is c , the KL divergence is infinite , while our truncated bound is c 2 + 2 c .",
"This suggests that appropriately truncating high-loss examples makes log loss robust and allows us to use log loss as a surrogate for distinguishability, even in the presence of invalid and noisy references.",
"Loss truncation.",
"Given that the log loss on any c -fraction of the data is a surrogate loss for distinguishability (Eq (6)), a key parameter to optimize is the truncated distribution p t .",
"An oracle solution would exhaustively search over p t and which data to drop.",
"However, exhaustively searching through P c,p ref is a combinatorial optimization problem and infeasible.",
"Our approach will be to optimize p t with a heuristic.",
"The truncated objective takes the form of a log loss and negative entropy term, E p t [log p ( y | x )] + E p t [log p t ( y | x )] and we will select p t by dropping the examples with the highest log loss, treating the negative entropy term as being upper bounded by zero.",
"This heuristic is straightforward to compute, provides an upper bound on distinguishability, and 0 1 2 3 4 5 6 0 2 4 Pinsker's Loss-truncated (ours) TV^2 Figure 4: Pinsker's inequality, our bound, and the total variation squared of parameter estimates for different parameter estimates ( c = 0 . 2 ).",
"As an example of how our heuristic can improve estimation and tightness in bounds, consider the earlier toy example in Figure 1. In this example, we find the optimal mean for a single Gaussian with fixed variance which fits mixture of two Gaussians.",
"Figure 4 shows the objective function value implied by the TV loss, log loss (Pinsker's bound), and our c -truncated bound as a function of the Gaussian mean.",
"We find that log loss provides an upper bound on distinguishability (via Pinsker's inequality) but is loose and results in a low quality estimate.",
"In contrast, c -truncation results in a nearly identical minimizer as directly minimizing TV.",
"Our algorithm has three components at training time.",
"First, it trains a model on all the data using standard hyperparameters, which we refer to as hotstarting the model.",
"Second, it tracks a running estimate of the 1 c quantile of the losses during training.",
"Third, it performs gradient updates on examples that are below the current 1 c quantile estimate.",
"We present the pseudocode in Algorithm 1 and describe each step in detail below.",
"2 Hotstarting.",
"First, our algorithm hotstarts the model ( hotstart ( M ) in Alg.",
"1) by training with the standard log loss.",
"Hotstarting address two challenges in optimizing the truncated loss.",
"First, losses are uninformative at the start of training so trun-2 Our code is available at https://github.com/ ddkang/loss_dropper .",
"cating examples based on these losses will result in dropping valid examples.",
"We have empirically found that truncating after hotstarting primarily drops invalid references, which avoids this problem.",
"Second, hotstarting allows the model to transfer information from the entire dataset to the clean 1 c fraction of the data.",
"Examples that cause a model to hallucinate may still contain valid information about the fluency of a sentence, which hotstarting can capture.",
"This is effectively pretraining our model on the entire data before learning to generate on the clean subset.",
"We have found this procedure to be effective in practice.",
"Quantile estimation.",
"Second, our algorithm keeps track of the 1 c quantile over the distribution of losses.",
"For each new minibatch B , we update an online estimate of the 1 c quantile ( estimateQuantile ( M, B ) in Alg.",
"1).",
"To estimate this quantile, our algorithm constructs a histogram over the last 10,000 examples seen during training and estimates the empirical 1 c quantile every 10,000 examples.",
"3 Loss dropping.",
"Third, our algorithm will perform minibatch stochastic gradient descent while excluding examples that have losses above the current top 1 c quantile estimate q ( truncatedUpdate ( M, B, q ) in Alg.",
"1).",
"Dropping can be accomplished in automatic differentiation packages (e.g., Tensorflow and PyTorch) by setting the loss on the given example to zero.",
"Thus far, our goal has been to robustly learn the underlying distribution.",
"However, in some cases, a user may wish to only generate high confidence sequences, which will ideally correspond to high quality sequences.",
"To generate such samples, we propose sequence-level rejection sampling .",
"Recall that our truncation heuristic selects for the 1 c quantile of the distribution.",
"For a user-defined level , our rejection sampling scheme will aim to generate samples from the 1 c quantile.",
"To perform rejection sampling, given a model and a user-defined rejection level , we first sample N sequences (e.g., titles in a summarization task).",
"Then, we sample a random sequence from the N smallest samples as measured by log loss.",
"Ideally, 3 For datasets with fewer than 10,000 examples, we can perform this procedure over the entire dataset.",
"Data: Model M , c fraction to drop, T iterations M hotstart ( M ) ; for i 0 to T do B minibatch() ; q = estimateQuantile ( M, B ) ; M = truncatedUpdate ( M, B, q ) ; endAlgorithm 1: The proposed loss truncation procedure with three components (see main text for details for each component).",
"this procedure will return a sample in the 1 c quantile of p ref .",
"We show that rejection sampling can outperform baselines in generating factual summaries (5).",
"We further show examples of selected and rejected samples in Appendix A. 5 Evaluation 5.1 Experimental Setup Dataset and Task.",
"We primarily evaluate loss truncation on abstractive summarization in the form of generating news headlines from an article.",
"We selected this task to highlight that loss truncation can improve sample quality and factual accuracy, while also achieving the secondary goal of diversity for abstractive systems (See et al., 2017; Kryscinski et al., 2019).",
"We evaluated on the Gigaword summarization task (Rush et al., 2017) as in Gehrmann et al. (2018).",
"While there are other summarization datasets, we chose Gigaword for the following reasons.",
"First, it is large enough that sample quality defects are not caused by a lack of data.",
"Second, the dataset is structured so that neither model nor computation is the bottleneck in performance: the standard sequence-to-sequence models are competitive on the Gigaword dataset.",
"Third, while Gigaword dataset is known to have noise, this matches the behavior of existing annotation errors (Beigman and Klebanov, 2009; Klebanov and Beigman, 2010) and uncertainty (Kryscinski et al., 2019).",
"To show that loss truncation is applicable beyond summarization, we also performed a preliminary evaluation of our approach on the E2E NLG task.",
"In E2E, the goal is to generate restaurant reviews from meaning representations (Dusek et al., 2019).",
"Model and Baselines.",
"We used a standard LSTM architecture with global attention for summarization that has been used for the Gigaword summarization task in the past (Gehrmann et al., 2018).",
"The learning rate and hyperparameters are given in Appendix C. For the E2E task, we use a standard model with the exact settings as in Puzikov and Gurevych (2018).",
"For loss truncation on Gigaword, we used c = 0 .",
"6 .",
"We matched the total number of training steps when training via loss truncation (including the hotstart) and standard log loss.",
"We sampled from the full model distribution for loss truncated models except when rejection sampling.",
"As baselines on Gigaword, we generate from the log loss trained language model using several decoders that have been reported to mitigate low-quality outputs such as beam search, topk sampling (Fan et al., 2018), and topp sampling (Holtz-man et al., 2019).",
"We also evaluate directly sampling from the probabilistic model in order to estimate overall distinguishability and understand the diversity-quality trade-offs of each model.",
"Finally, on Gigaword, we also compared against a recent generative adversarial network (GAN) model with a publicly available implementation (Wang and Lee, 2018).",
"whether loss truncation improves model distinguishability on summarization by measuring the HUSE estimator for TV (Hashimoto et al., 2019).",
"HUSE measures distinguishability by learning a classifier over the log-probabilities and human evaluation scores over both samples from the model and references.",
"We also use HUSE to evaluate the quality-diversity tradeoffs of the models by estimating both HUSE-Q (which measures quality via human judgement) and HUSE-D (which measures diversity via statistical evaluation).",
"In order to assess whether this leads to improvements in the faithfulness of samples, we measure whether loss truncation reduces the number of factually inaccurate outputs from the model via a crowdsourced survey.",
"We designed our prompt based on earlier factual accuracy human evaluation (Novikova et al., 2017) and measured whether the original article contained all of the information given in the generated title.",
"Automated metrics.",
"While human evaluation is our primary metric of evaluation as it is considered gold-standard, we additionally evaluate on Loss trunc.",
"automated metrics to contextualize our human evaluation results.",
"We measure ROUGE-L (Lin and Hovy, 2003) for summarization and BLEU score (Papineni et al., 2002) for E2E.",
"Using the HUSE score to measure the TV distance, we assessed whether loss truncation successfully improved our model in terms of distinguishability compared to log loss.",
"As shown in Table 2, loss truncation outperforms all baselines on HUSE score (including the original log loss model Full samp ), suggesting the truncated model is a better language model than the log loss model as measured by distinguishability.",
"We find that that loss truncation improves over the log loss by increasing the generation quality (HUSE-Q) by 12% without substantially lowering diversity (e.g., memorizing examples from the training set).",
"These results affirmatively answers an open question posed by Hashimoto et al. (2019) on whether it is possible to obtain models that improve the quality while maintaining overall distinguishability compared to log loss trained models.",
"Post-hoc modification of the log loss model's distribution by removing unlikely words using either topk or topp sampling result in substantial losses in HUSE due to losses in diversity.",
"We further considered matching the entropy of the loss truncation model with topk = 100 and topp = 0 .",
"9 (Appendix C).",
"At a fixed entropy, loss truncation can outperform on HUSE by up to 26%.",
"Comparing models with high sample quality, loss truncation with rejection sampling improves upon all baselines (including beam search) in terms of raw human quality evaluation (HUSE-Q), and we see that the Pareto frontier of truncation and rejection sampling (which can be achieved via ensem-bling) dominates the baselines on both quality and diversity (Figure 5).",
"Rejection sampling decreases overall HUSE score because it is designed to only return high quality samples (i.e., high HUSE-Q): this comes at the cost of reduced diversity, so overall HUSE score suffers.",
"The results amongst our baselines recapitulate known results for the quality-diversity tradeoffs of existing methods.",
"Beam search has high sample quality, but low diversity; topk and topp samplers provide diversity gains over beam search; and GANs generally underperform well-tuned log loss based models on both diversity and quality.",
"We now ask whether improvements in distinguishability (as measured by HUSE) for the loss truncation model translate to practical improvements in sample quality, such as the factual accuracy of generated outputs in summarization.",
"We evaluate this through a crowdsourced study on factual accuracy.",
"Since we are interested in studying whether our model can produce high quality samples, we used rejection sampling with = 0 .",
"1 to obtain high-quality samples from the model.",
"We compare this to the log loss model with baseline decoders.",
"For the topp and topk sampling decoders that have quality-diversity tradeoffs, we select k and p such that the entropy of the sampling distribution matches our rejection sampling approach (see Appendix C for details).",
"To measure factual accuracy, we asked crowd workers how much information in the generated titles was contained in the article in a similar fashion to Novikova et al. (2017).",
"Table 3 shows the Condition Mean score Human 3.63 0.05 Truncation + Rejection ( = 0 . 1 ) 3.79 0.06 Beam 3.51 0.05 topp ( p = 0 . 4 ) 3.42 0.05 topk ( k = 2 ) 3.29 0.05 Sampling 2.96 0.05 Table 3: Mean scores and standard errors of factuality in generated news titles given articles.",
"average factual accuracy rating for each model.",
"We find that rejection sampling outperforms all baselines, including the current gold standard of beam search, and matches the human reference level of factual accuracy.",
"Although it may seem surprising that loss truncation and rejection sampling together can achieve the same factual accuracy score as humans, recall that over 34% of the dataset consists of titles which have facts that are not contained in the article.",
"The loss truncation approach biases the model towards learning only the easily predicted (and likely factually accurate) titles.",
"Finally, one of the benefits of optimizing for distinguishability is that it naturally optimizes for both diversity and quality.",
"Manually examining outputs from the models, we find that directly sampling from the loss truncated model often produces high quality and diverse outputs.",
"We show examples of generated outputs for baselines and loss truncation in Table 4.",
"Loss truncation uses different phrasings (at least # killed', and floods sweep') while topk follows a nearly templated pattern with a few changes to the words which appear.",
"Topp and direct sampling both have diverse phrasings, but also hallucinate facts (earthquake' in sampling and torrential rains' in topp sampling).",
"While our primary evaluation metrics are human evaluations (HUSE and factuality), we additionally investigate automated metrics to further contextualize our results.",
"For summarization, we used ROUGE-L and for E2E we use BLEU score for the automated metrics.",
"For summarization, the ROUGE-L scores for loss truncation and entropy-matched topk and top-p decoding were 23.2, 22.8, and 22.8 respectively.",
"While loss truncation does not substantially improve ROUGE-L, we see that it still outperforms baselines.",
"We do not expect reference-based evaluations to fully capture the benefits of loss truncation, as these metrics encourage the models to fully imitate the data distribution including invalid and hallucinated examples.",
"For E2E, the BLEU scores for loss truncation and the baseline were 0.72 and 0.64 respectively.",
"We confirmed that the baseline model for the E2E task achieves a similar score as reported by Bal-akrishnan et al. (2019).",
"Perhaps surprisingly, improving BLEU score to 0.72 almost closes the gap to using complex tree-structured semantic representations, which achieves a BLEU score of 0.74 (Balakrishnan et al., 2019).",
"We further show that loss truncation is not sensitive to the hyperparameter c on automated metrics in Appendix E.1 and provide a preliminary investigation of combining loss truncation and alternative decoders in Appendix E.2.",
"Decoder-based diversity.",
"Researchers have proposed a variety of models for text generation (Rad-ford et al., 2019; Keskar et al., 2019; Sutskever et al., 2014).",
"These models generate text using decoding methods such as beam search.",
"While beam search is generally thought of as the gold standard (Tillmann and Ney, 2003), it can produce generic and repetitive outputs (Holtzman et al., 2019).",
"To achieve diversity, topk (Fan et al., 2018) and topp (Holtzman et al., 2019) sampling stochastically decodes the outputs after restricting the output space to avoid low-quality outputs.",
"While these techniques can improve generation quality, they rely on models trained via log loss, which we show can result in undesired behavior that cannot be fixed post-hoc.",
"Our work is complementary to existing work on decoders by proposing a loss that can improve the probabilistic models which these decoders operate on.",
"Loss modifications.",
"Prior work has identified specific issues in generative models, such as repetitiveness, and proposed loss modifications to address these specific issues in the context of long text generation (Welleck et al., 2019; Holtzman et al., 2018).",
"In contrast, we identify an issue with the widely used log loss, and propose loss truncation, which does not require a taskand issue-specific Method Example Context at least ## people have been killed and more than ##,### made homeless by floods that swept across southern africa in the past week , striking a region already grappling with severe food shortages .",
"modification.",
"Many of the penalties and decoding techniques proposed in these earlier works can be combined with truncated log loss to obtain models that are more robust to noisy references.",
"Contemporaneous with our work, Tian et al. (2019) propose an attention weight approach to improving generation faithfulness via decoder and loss modifications.",
"Our work complements this by providing a conceptual basis for improving faithfulness by ignoring examples (i.e., optimizing distin-guishability), and providing a simple and general loss.",
"We consider complex, model dependent loss truncation methods for optimizing distinguishability to be exciting future work.",
"Other generation methods optimize for task-specific losses (Och, 2003; Shen et al., 2015).",
"Task specific losses are not known in many cases and thus we require an effective task-agnostic loss, e.g., log loss or TV.",
"We show that TV acts as a useful task-agnostic goodness of fit measure, and we provide an improved alternative to log loss.",
"GANs.",
"GANs have been proposed to learn models that minimize distinguishability (Li et al., 2017; Ra-jeswar et al., 2017; Dai et al., 2017).",
"While GANs have been successful in generating images (Good-fellow et al., 2014; Brock et al., 2018), GANs remaining challenging to optimize for text due to the discrete nature of text.",
"Our findings match earlier reports that GANs underperform log loss trained sequence-to-sequence models (Caccia et al., 2018).",
"In this work, we show that better training methods for distinguishability can arise from modifying the standard log loss via truncation.",
"Robust learning.",
"Robust learning is the study of learning in the face of outliers (Tukey, 1960; Donoho, 1982; Huber, 1992).",
"Our work is related to the (cid:15) -contamination model, in which an (cid:15) fraction of the data has been modified, potentially by an adversary (Diakonikolas et al., 2018).",
"Our work shows that robust learning under log loss can result in improved empirical performance and bounds on distinguishability.",
"While there are a number of effective approaches to robust learning (Diakonikolas et al., 2018; Fischler and Bolles, 1981), we focus on a simple truncation procedure as it is one of the only procedures scaleable enough to apply on large-scale generation datasets.",
"Our work shows that more effective, scalable robust learning procedures can help improve natural language generation methods.",
"In this work, we show that log loss is not robust to noise, which can in turn cause undesired behavior, such as hallucinating facts in summarization.",
"In response, we propose loss truncation, a robust training method that optimizes for distinguishability of generated samples.",
"We additionally propose a sequence-level rejection sampling scheme to generate high quality sequences.",
"We show that loss truncation outperforms a range of baselines (includ-ing beam search, topp , topk , and full sampling) on distinguishability.",
"We additionally show that rejection sampling outperforms all baselines, including beam search, on generating factual summaries.",
"These results suggest that robust learning in the form of truncating the log loss can complement model-based approaches to faithful generation by ignoring invalid and undesired references."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"abstain"
] |
[
"We propose SentiBERT , a variant of BERT that effectively captures compositional sentiment semantics.",
"The model incorporates contextualized representation with binary constituency parse tree to capture semantic composition.",
"Comprehensive experiments demonstrate that SentiBERT achieves competitive performance on phrase-level sentiment classification.",
"We further demonstrate that the sentiment composition learned from the phrase-level annotations on SST can be transferred to other sentiment analysis tasks as well as related tasks, such as emotion classification tasks.",
"Moreover, we conduct ablation studies and design visualization methods to understand SentiBERT .",
"We show that SentiBERT is better than baseline approaches in capturing negation and the contrastive relation and model the compositional sentiment semantics.",
"Sentiment analysis is an important language processing task (Pang et al., 2002, 2008; Liu, 2012).",
"One of the key challenges in sentiment analysis is to model compositional sentiment semantics.",
"Take the sentence Frenetic but not really funny. in Figure 1 as an example.",
"The two parts of the sentence are connected by but , which reveals the change of sentiment.",
"Besides, the word not changes the sentiment of really funny .",
"These types of negation and contrast are often difficult to handle when the sentences are complex (Socher et al., 2013; Tay et al., 2018; Xu et al., 2019).",
"In general, the sentiment of an expression is determined by the meaning of tokens and phrases and the way how they are syntactically combined.",
"Prior studies consider explicitly modeling compositional sentiment semantics over constituency structure with recursive neural networks (Socher et al., 2012, (cid:2)(cid:3)(cid:4)(cid:5)(cid:5)(cid:6) (cid:7)(cid:8)(cid:9)(cid:9)(cid:6) (cid:10) (cid:9)(cid:11)(cid:12) (cid:13)(cid:8)(cid:12) (cid:14)(cid:2)(cid:3)(cid:9)(cid:3)(cid:12)(cid:15)(cid:16) (cid:17)(cid:3)(cid:18)(cid:4)(cid:12)(cid:15)(cid:19)(cid:3) (cid:17)(cid:3)(cid:8)(cid:12)(cid:2)(cid:4)(cid:5) (cid:20)(cid:11)(cid:21)(cid:15)(cid:12)(cid:15)(cid:19)(cid:3) Figure 1: Illustration of the challenges of learning sentiment semantic compositionality.",
"2013).",
"However, these models that generate representation of a parent node by aggregating the local information from child nodes, overlook the rich association in context.",
"In this paper, we propose SentiBERT to incorporate recently developed contextualized representation models (Devlin et al., 2019; Liu et al., 2019) with the recursive constituency tree structure to better capture compositional sentiment semantics.",
"Specifically, we build a simple yet effective attention network for composing sentiment semantics on top of BERT (Devlin et al., 2019).",
"During training, we follow BERT to capture contextual information by masked language modeling.",
"In addition, we instruct the model to learn composition of meaning by predicting sentiment labels of the phrase nodes.",
"Results on phrase-level sentiment classification on Stanford Sentiment Treebank (SST) (Socher et al., 2013) indicate that SentiBERT improves significantly over recursive networks and the base-(cid:2) (cid:2) (cid:3) (cid:2) (cid:3) (cid:2) (cid:2) (cid:2) (cid:3) (cid:2) (cid:4)(cid:4)(cid:4)(cid:4)(cid:4)(cid:4) (cid:2)(cid:3)(cid:3)(cid:4)(cid:5)(cid:3)(cid:6)(cid:7)(cid:5) (cid:2)(cid:3)(cid:4)(cid:5)(cid:6)(cid:7)(cid:8)(cid:9)(cid:10)(cid:11)(cid:7)(cid:8)(cid:2)(cid:4)(cid:7)(cid:11)(cid:12)(cid:13)(cid:14)(cid:12)(cid:10)(cid:15) (cid:4)(cid:4)(cid:4)(cid:4)(cid:4)(cid:4) (cid:16)(cid:10)(cid:17)(cid:18)(cid:10)(cid:6)(cid:12)(cid:14)(cid:12)(cid:10)(cid:15) (cid:2) (cid:4) (cid:3) (cid:4) (cid:2) (cid:2) (cid:3) (cid:4) (cid:3) (cid:4) (cid:4)(cid:4)(cid:4)(cid:4)(cid:4)(cid:4) (cid:19)(cid:10)(cid:20)(cid:7)(cid:15)(cid:8)(cid:21)(cid:15)(cid:18)(cid:22)(cid:14) (cid:8)(cid:7)(cid:9)(cid:10)(cid:11)(cid:4)(cid:12)(cid:13) (cid:8)(cid:7)(cid:9)(cid:10)(cid:11)(cid:4)(cid:12)(cid:13)(cid:13) (cid:8)(cid:7)(cid:9)(cid:10)(cid:11)(cid:4)(cid:12)(cid:13)(cid:13)(cid:13) (cid:23)(cid:5)(cid:24)(cid:7)(cid:4)(cid:8)(cid:25)(cid:26) (cid:23)(cid:5)(cid:24)(cid:7)(cid:4)(cid:8)(cid:27)(cid:26) (cid:2)(cid:2)(cid:2)(cid:2)(cid:2)(cid:2) (cid:2)(cid:2)(cid:2)(cid:2)(cid:2)(cid:2) (cid:5)(cid:6) (cid:2) (cid:3) (cid:5)(cid:6) (cid:7)(cid:3) (cid:2) (cid:2) (cid:7)(cid:3) (cid:5)(cid:6) (cid:2) (cid:3) (cid:5)(cid:6) (cid:7)(cid:3) (cid:2) (cid:2) (cid:7)(cid:3) (cid:28)(cid:29)(cid:30)(cid:19) Figure 2: The architecture of SentiBERT .",
"line BERT model.",
"As phrase-level sentiment labels are expensive to obtain, we further explore if the compositional sentiment semantics learned from one task can be transferred to others.",
"In particular, we find that SentiBERT trained on SST can be transferred well to other related tasks such as twitter sentiment analysis (Rosenthal et al., 2017) and emotion intensity classification (Mohammad et al., 2018) and contextual emotion detection (Chatter-jee et al., 2019).",
"Furthermore, we conduct comprehensive quantitative and qualitative analyses to evaluate the effectiveness of SentiBERT under various situations and to demonstrate the semantic compositionality captured by the model.",
"The source code is available at https://github.com/ WadeYin9712/SentiBERT .",
"Sentiment Analysis Various approaches have been applied to build a sentiment classifier, including feature-based methods (Hu and Liu, 2004; Pang and Lee, 2004), recursive neural networks (Socher et al., 2012, 2013; Tai et al., 2015), convolution neural networks (Kim, 2014) and recurrent neural networks (Liu et al., 2015).",
"Recently, pre-trained language models such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and Sen-tiLR (Ke et al., 2019) achieve high performance in sentiment analysis by constructing contextualized representation.",
"Inspired by these prior studies, we design a transformer-based neural network model to capture compositional sentience semantics by leveraging binary constituency parse tree.",
"Semantic Compositionality Semantic composition (Pelletier, 1994) has been widely studied in NLP literature.",
"For example, Mitchell and Lap-ata (2008) introduce operations such as addition or element-wise product to model compositional semantics.",
"The idea of modeling semantic composition is applied to various areas such as sentiment analysis (Socher et al., 2013; Zhu et al., 2016), semantic relatedness (Marelli et al., 2014) and capturing sememe knowledge (Qi et al., 2019).",
"In this paper, we demonstrate that the syntactic structure can be combined with contextualized representation such that the semantic compositionality can be better captured.",
"Our approach resembles to a few recent attempts (Harer et al., 2019; Wang et al., 2019) to integrate tree structures into self-attention.",
"However, our design is specific for the semantic composition in sentiment analysis.",
"We introduce SentiBERT , a model that captures compositional sentiment semantics based on constituency structures of sentences.",
"SentiBERT consists of three modules: 1) BERT; 2) a semantic composition module based on an attention network; 3) phrase and sentence sentiment predictors.",
"The three modules are illustrated in Figure 2 and we provide an overview in below.",
"BERT We incorporate BERT (Devlin et al., 2019) as the backbone to generate contextualized 3697 representation of input sentence.",
"Semantic Composition Module This module aims to obtain effective phrase representation guided by the contextualized representation and constituency parsing tree.",
"To refine phrase representation based on the structural information and its constituencies, we design a two-level attention mechanism: 1) Attention to Tokens and 2) Attention to Children .",
"Phrase Node Prediction SentiBERT is supervised by phrase-level sentiment labels.",
"We use cross-entropy as the loss function for learning the sentiment predictor.",
"In this section, we describe the attention networks for sentiment semantic composition in detail.",
"We first introduce the notations.",
"s = [ w 1 , w 2 , ..., w n ] denotes a sentence which consists of n words.",
"phr = [phr 1 , phr 2 , ..., phr m ] denotes the phrases on the binary constituency tree of sentence s.",
"h = [ h 1 , h 2 , ..., h n ] is the contextualized representation of tokens after forwarding to a fully-connected layer, where h t R d .",
"Suppose st i and en i are beginning and end indices of the i -th phrase where w st i , w st i +1 , ..., w en i are constituent tokens of the i -th phrase.",
"The corresponding token representation is [ h st i , h st i +1 , ..., h en i ] .",
"p i is the phrase representation of the i -th phrase.",
"Attention to Tokens Given the contextualized representations of the tokens covered by a phrase.",
"We first generate phrase representation v i for a phrase i by the following attention network.",
"q i = 1 en i st i + 1 en i (cid:2) j = st i h j , t j = Attention( q i , h j ) , st i j en i , a j = exp( t j ) (cid:3) en i k = st i exp( t k ) , o i = en i (cid:2) j = st i a j h j .",
"(1) In Eq.",
"(1), we first treat the averaged representation for each token as the query, and then allocate attention weights according to the correlation with each token.",
"a j represents the weight distributed to the j -th token.",
"We concatenate the weighted sum o i and q i and feed it to forward networks.",
"Lastly, we obtain the initial representation for the phrase v i R d based on the representation of constituent tokens.",
"The detailed computation of attention mechanism is shown in Appendix A.1.",
"Attention to Children Furthermore, we refine phrase representations in the second layer based on constituency parsing tree and the representations obtained in the first layer.",
"To aggregate information based on hierarchical structure, we develop the following network.",
"For each phrase, the attention network computes correlation with its children in the binary constituency parse tree and itself.",
"Assume that the indices of child nodes of the i -th phrase are lson and rson .",
"Their representations generated from the first layer are v i , v lson , and v rson , respectively.",
"We generate the attention weights r lson , r rson and r i over the i -th phrase and its left and right children by the following.",
"c lson = Attention( v i , v lson ) , c rson = Attention( v i , v rson ) , c i = Attention( v i , v i ) , r lson , r rson , r i = Softmax( c lson , c rson , c i ) .",
"(2) Then the refined representation of phrase i is computed by f i = r lson v lson + r rson v rson + r i v i .",
"Finally, we concatenate the weighted sum f i and v i and feed it to forward networks with SeLU (Klambauer et al., 2017) and GeLU activations (Hendrycks and Gimpel, 2017) and layer normalization (Ba et al., 2016), similar to Joshi et al. (2020) to generate the final phrase representation p i R d .",
"Note that when the child of i -th phrase is token node, the attention mechanism will attend to the representation of all the subtokens the token node covers.",
"Inspired by BERT, the training objective of SentiBERT consists of two parts: 1) Masked Language Modeling.",
"Some texts are masked and the model learn to predict them.",
"This objective allows the model learn to capture the contextual information as in the original BERT model.",
"2) Phrase Node Prediction.",
"We further consider training the model to predict the phrase-level sentiment label based on the aforementioned phrase representations.",
"This allows SentiBERT lean to capture the compositional sentiment semantics.",
"Similar to BERT, in the 3698 transfer learning setting, pre-trained SentiBERT model can be used to initialize the model parameters of a downstream model.",
"We evaluate SentiBERT on the SST dataset.",
"We then evaluate SentiBERT in a transfer learning setting and demonstrate that the compositional sentiment semantics learned on SST can be transferred to other related tasks.",
"We evaluate how effective SentiBERT captures the compositional sentiment semantics on SST dataset (Socher et al., 2013).",
"The SST dataset has several variants.",
"SST-phrase is a 5-class classification task that requires to predict the sentiment of all phrases on a binary constituency tree.",
"Different from Socher et al. (2013), we test the model only on phrases (non-terminal constituents) and ignore its performance on tokens.",
"SST-5 is a 5-class sentiment classification task that aims at predicting the sentiment of a sentence.",
"We use it to test if SentiBERT learns a better sentence representation through capturing compositional sentiment semantics.",
"Similar to SST-5, SST-2 and SST-3 are 2-class and 3-class sentiment classification tasks.",
"However, the granularity of the sentiment classes is different.",
"Besides, to test the transferability of SentiBERT , we consider several related datasets, including Twitter Sentiment Analysis (Rosenthal et al., 2017), Emotion Intensity Classification (Mo-hammad et al., 2018) and Contextual Emotion Detection (EmoContext) (Chatterjee et al., 2019).",
"Details are shown in Appendix A.2.",
"We build SentiBERT on the HuggingFace library 1 and initialize the model parameters using pre-trained BERT-base and RoBERTa-base models whose maximum length is 128, layer number is 12, and embedding dimension is 768.",
"For the training on SST-phrase, the learning rate is 2 10 5 , batch size is 32 and the number of training epochs is 3.",
"For masking mechanism, to put emphasis on 1 https://github.com/huggingface modeling sentiments, the probability of masking opinion words which can be retrieved from SentiWordNet (Baccianella et al., 2010) is set to 20%, and for the other words, the probability is 15%.",
"For fine-tuning on downstream tasks, the learning rate is { 1 10 5 1 10 4 } , batch size is { 16 , 32 } and the number of training epochs is 1 5 .",
"We use Stanford CoreNLP API (Manning et al., 2014) to obtain binary constituency trees for the sentences of these tasks to keep consistent with the settings on SST-phrase.",
"Note that when fine-tuning on sentence-level sentiment and emotion classification tasks, the objective is to correctly label the root of tree, instead of targeting at the [CLS] token representation as in the original BERT.",
"We first compare the proposed attention networks ( SentiBERT w/o BERT) with the following baseline models trained on SST-phrase corpus to evaluate the effectiveness of the architecture design: 1) Recursive NN (Socher et al., 2013); 2) GCN (Kipf and Welling, 2017); 3) Tree-LSTM (Tai et al., 2015); 4) BiLSTM (Hochreiter and Schmidhuber, 1997) w/ Tree-LSTM.",
"To further understand the effect of using contextualized representation, we compare SentiBERT with the vanilla pre-trained BERT and its variants which combine the four mentioned baselines and BERT.",
"The training settings remain the same with SentiBERT .",
"We also initialize SentiBERT with pre-trained parameters of RoBERTa ( SentiBERT w/ RoBERTa) and further compare it with its variants.",
"As shown in Table 1, SentiBERT and SentiBERT w/ RoBERTa substantially outperforms their corresponding variants and the networks merely built on the tree.",
"Specifically, we first observe that though our attention network ( SentiBERT w/o BERT) is simple, it is competitive with Recursive NN, GCN and Tree-LSTM.",
"Besides, SentiBERT largely outperforms SentiBERT w/o BERT by leveraging contextualized representation.",
"Moreover, the results manifest that SentiBERT and SentiBERT w/ RoBERTa outperform the BERT and RoBERTa, indicating the importance of incorporating syntactic guidance.",
"Though the designed models are effective, we are curious how beneficial the compositional sentiment semantics learned on SST can be transferred to other tasks.",
"We compare SentiBERT with pub-3699 Models SST-phrase SST-5 Recursive NN 58.33 46.53 GCN 60.89 49.34 Tree-LSTM 61.71 50.07 BiLSTM w/ Tree-LSTM 61.89 50.45 BERT w/ Mean pooling 64.53 50.68 BERT w/ GCN 65.23 54.56 BERT w/ Tree-LSTM 67.39 55.89 RoBERTa w/ Mean pooling 67.73 56.34 SentiBERT w/o BERT 61.04 50.31 SentiBERT 68.31 56.10 SentiBERT w/ RoBERTa 68.98 56.87 Table 1: The averaged accuracies on SST-phrase and SST-5 tasks (%) for 5 runs.",
"lished models BERT, XLNet, RoBERTa and their variants on benchmarks mentioned in Section 4.1.",
"Specifically, BERT' indicates the model trained on the raw texts of the SST dataset.",
"BERT w/ Mean pooling' denotes the model trained on SST, whose phrase and sentence representation is computed by mean pooling on tokens.",
"BERT w/ Mean pooling' merely leverages the phrases' range information rather than syntactic structural information.",
"Sentiment Classification Tasks The evaluation results of sentence-level sentiment classification on the three tasks are shown in Table 2. Despite the difference among tasks and datasets, from experimental results, we find that SentiBERT has competitive performance compared with various baselines.",
"SentiBERT achieves higher performance than the vanilla BERT and XLNet in tasks such as SST-3 and Twitter Sentiment Analysis.",
"Besides, SentiBERT significantly outperform Models Emotion Intensity EmoContext BERT 65.2 73.49 RoBERTa 66.4 74.20 SentiBERT w/o Pre-training 66.0 73.81 SentiBERT 66.5 74.23 SentiBERT w/ RoBERTa 67.2 74.67 Table 3: The averaged results on several emotion classification tasks (%) for 5 runs.",
"SentiBERT w/o BERT.",
"This demonstrates the importance of leveraging pre-trained BERT model.",
"Moreover, SentiBERT outperforms BERT w/ Mean pooling.",
"This indicates the importance of modeling the compositional structure of sentiment.",
"Emotion Classification Tasks Emotion detection is different from sentiment classification.",
"However, these two tasks are related.",
"The task aims to classify fine-grained emotions, such as happiness, fearness, anger, sadness, etc.",
"It is challenging compared to sentiment analysis because of various emotion types.",
"We fine-tune SentiBERT and SentiBERT w/ RoBERTa on Emotion Intensity Classification and EmoContext.",
"Table 3 shows the results on the two emotion classification tasks.",
"Similar to the results in sentiment classification tasks, SentiBERT obtains the best results, further justifying the transferability of SentiBERT .",
"We conduct experiments on SST-phrase using BERT-base model as backbone to demonstrate the effectiveness and interpretability of the SentiBERT architecture in terms of semantic compositionality.",
"We also explore potential of the model when lacking phrase-level sentiment information.",
"In order to simplify the analysis of the change of sentiment polarity, we convert the 5-class labels to to 3-class: the classes very negative' and negative' are converted to be negative'; the classes very positive' and positive' are converted to be positive'; the class neutral' remains the same.",
"The details of statistical distribution in this part is shown in Appendix A.3.",
"We consider the following baselines to evaluate the effectiveness of each component in SentiBERT .",
"First we design BERT w/ Mean pooling as a base model, to demonstrate the ne-Figure 3: Evaluation for local difficulty.",
"The figure shows the accuracy difference on phrase node sentiment prediction with BERT w/ Mean pooling for different local difficulty.",
"cessity of incorporating syntactic guidance and implementing aggregation on it.",
"Then we compare SentiBERT with alternative aggregation approaches, Tree-LSTM, GCN and w/o Attention to Children.",
"We investigate how effectively SentiBERT captures compositional sentiment semantics.",
"We focus on how the representation in SentiBERT captures the sentiments when the children and parent in the constituency tree have different sentiments (i.e., sentiment switch) as shown in the red boxes of Figure 1. Here we focus on the sentiment switches between phrases.",
"We assume that the more the sentiment switches, the harder the prediction is.",
"We analyze the model under the following two scenarios: local difficulty and global difficulty .",
"Local difficulty is defined as the number of sentiment switches between a phrase and its children.",
"As we consider binary constituency tree.",
"The maximum number of sentiment switches for each phrase is 2. Global difficulty indicates number of sentiment switches in the entire constituency tree.",
"The maximum number of sentiment switches in the test set is 23.",
"The former is a phrase-level analysis and the latter is sentence level.",
"We compare SentiBERT with aforementioned baselines.",
"We group all the nodes and sentences in the test set by local and global difficulty.",
"Results are shown in Figure 3 and Figure 4. Our model achieves better performance than baselines in all situations.",
"Also, we find that with the increase of difficulty, the gap between our models Figure 4: Evaluation for global difficulty.",
"and baselines becomes larger.",
"Especially, when the sentiment labels of both children are different from the parent node (i.e., local difficulty is 2), the performance gap between SentiBERT and BERT w/ Tree-LSTM is about 7% accuracy.",
"It also outperforms the baseline BERT model with mean pooling by 15%.",
"This validates the necessity of structural information for semantic composition and the effectiveness of our designed attention networks for leveraging the hierarchical structures.",
"Negation: Since the negation words such as no' , n't' and not' will cause the sentiment switches, the number of negation words also reflects the difficulty of understanding sentence and its constituencies.",
"We first group the sentences by the number of negation words, and then calculate the accuracy of the prediction on their constituencies respectively.",
"In test set, as there are at most six negation words and the amount of sentences with above three negation words is small, we separate all the data into three groups.",
"Results are provided in Figure 5. We observe SentiBERT performs the best among all the models.",
"Similar to the trend in local and global difficulty experiments, the gap between SentiBERT and other baselines becomes larger with increase of negation words.",
"The results show the ability of SentiBERT when dealing with negations.",
"Contrastive Relation: We evaluate the effectiveness of SentiBERT with regards to tackling contrastive relation problem.",
"Here, we focus on the contrastive conjunction but .",
"We pick up the sentences containing word but' of which the sentiments of left and right parts are different.",
"In our analysis, a X but Y' can be counted as correct if and only if the sentiments of all the phrases in triple-let (X but Y', X' and Y') are predicted correctly.",
"Table 4 demonstrates the results.",
"SentiBERT outperforms other variants of BERT about 1%, demonstrating its ability in capturing contrastive relation in sentences.",
"We showcase several examples to demonstrate how SentiBERT performs sentiment semantic composition.",
"We observe the attention distribution among hierarchical structures.",
"In Figure 7, we demonstrate two sentences of which the sentiments of all the phrases are predicted correctly.",
"We also visualize the attention weights distributed to the child nodes and the phrases themselves to see which part might contribute more to the sentiment of those phrases.",
"SentiBERT performs well in several aspects.",
"First, SentiBERT tends to attend to adjectives such as frenetic' and funny' , which contribute to the phrases' sentiment.",
"Secondly, facing negation words, SentiBERT considers them and a switch can be observed between the phrases with and without negation word (e.g., not really funny' and re-ally funny' ).",
"Moreover, SentiBERT can correctly analyze the sentences expressing different sentiments in different parts.",
"For the first case, the model concentrates more on the part after but' .",
"We are also interested in analyzing how much phrase-level supervision SentiBERT needs in order to capture the semantic compositionality.",
"We vary the amount of phrase-level annotations used in training SentiBERT .",
"Before training, we randomly sample 0% to 100% with a step of 10% of labels from SST training set.",
"After pre-training on them, we fine-tune SentiBERT on tasks SST-5, SST-3 and Twitter Sentiment Analysis.",
"During fine-tuning, for the tasks which use phrase-level annotation, such as SST-5 and SST-3, we use the same phrase-level annotation during pre-training and the sentence-level annotation; for the tasks which do not have phrase-level annotation, we merely use the sentence-level annotation.",
"Results in Figure 6 show that with about 30%-50% of the phrase labels on SST-5 and SST-3, the model is able to achieve competitive results compared with XLNet.",
"Even without any phrase-level supervision, using 70%-80% of phrase labels in pre-training allows SentiBERT competitive with XLNet on the Twitter Sentiment Analysis dataset.",
"Furthermore, we find the confidence of about 40-50% of phrase nodes in SST-3 task is above 0.9 and the accuracy of predicting these phrases is above 90% on the SST dataset.",
"Considering the previous results, we speculate if we produce part of the phrase labels on generic texts, choose the predicted labels with high confidence and add them to the original SST training set during the training process, the results might be further improved.",
"We proposed SentiBERT , an architecture designed for capturing better compositional sentiment semantics.",
"SentiBERT considers the necessity of contextual information and explicit syntactic guidelines for modeling semantic composition.",
"Experiments show the effectiveness and transferability",
"of SentiBERT .",
"Further analysis demonstrates its interpretability and potential with less supervision.",
"For future work, we will extend SentiBERT to other applications involving phrase-level annotations.",
"We would like to thank the anonymous reviewers for the helpful discussions and suggestions.",
"Also, we would thank Liunian Harold Li, Xiao Liu, Wasi Ahmad and all the members of UCLA NLP Lab for advice about experiments and writing.",
"This material is based upon work supported in part by a gift grant from Taboola."
] | [
"objective",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"To translate large volumes of text in a globally connected world, more and more translators are integrating machine translation (MT) and post-editing (PE) into their translation work-flows to generate publishable quality translations.",
"While this process has been shown to save time and reduce errors, the task of translation is changing from mostly text production from scratch to fixing errors within useful but partly incorrect MT output.",
"This is affecting the interface design of translation tools, where better support for text editing tasks is required.",
"Here, we present the first study that investigates the usefulness of mid-air hand gestures in combination with the keyboard (GK) for text editing in PE of MT. Guided by a gesture elicitation study with 14 freelance translators, we develop a prototype supporting midair hand gestures for cursor placement, text selection, deletion, and reordering.",
"These gestures combined with the keyboard facilitate all editing types required for PE.",
"An evaluation of the prototype shows that the average editing duration of GK is only slightly slower than the standard mouse and keyboard (MK), even though participants are very familiar with the latter, and relative novices to the former.",
"Furthermore, the qualitative analysis shows positive attitudes towards hand gestures for PE, especially when manipulating single words.",
"In a well-connected world, translation is of ever-increasing importance (Bassnett, 2013).",
"To meet translation demands, machine translation (MT) is often employed as a cheaper and faster alternative to human translation (HT) (O'Brien, 2012).",
"Even though MT has improved drastically over the last 5 years, discussions about reaching human parity are still ongoing (Laubli et al., 2020) and limited to a small set of language pairs and domains for which ample training data is available.",
"For most application scenarios, however, MT quality is far from reaching the quality of highly trained professionals.",
"In an attempt to combine the best of both worlds, post-editing (PE) is becoming common practice, where human translators use raw MT output and make the necessary changes to produce an acceptable level of quality (Kopo-nen, 2016).",
"Although translators have approached PE with fear and skepticism (Lagoudaki, 2009), more recent studies found that nowadays translators are more open to it and that much of the original dislike was attributed to outdated perceptions of MT quality (Plitt and Masselot, 2010; Green et al., 2013).",
"Independent of translators' perceptions, studies found that PE increases productivity and decreases errors compared to translation from scratch (Green et al., 2013).",
"PE changes the translation task from mostly text generation to text editing, which involves an increased usage of navigation and deletion keys (Toral et al., 2018).",
"As a result, translators need better support with text editing operations, which raises the question whether interaction modalities other than mouse and keyboard can be beneficial for PE.",
"An interaction modality that has gained attention in other research areas (Koutsaba-sis and Vogiatzidakis, 2019) but so far remains unexplored for PE is mid-air hand gestures .",
"In this paper, we",
"(i) investigate which mid-air gestures combined with the keyboard (GK) are suitable for which text-editing operations in PE,",
"(ii) build a prototype supporting PE using GK, and",
"(iii) analyze editing times and subjective feedback on mid-air hand gestures compared to mouse and keyboard (MK) for specific PE operations.",
"To address these goals, we conducted a gesture elicitation study (GES) with professional translators, resulting in a set of gestures for different editing tasks, which were then implemented in a prototype.",
"Our experiment shows that, surprisingly, editing durations for most PE tasks were very similar in the conditions GK and MK, even though participants were much more experienced with the latter.",
"Furthermore, participants prefer manipulating single items 1 using gestures, while manipulating a group of items , which involves more complex text selection, received poorer subjective feedback.",
"In this section, we present related research on translation environments, multi-modal approaches to PE, and mid-air gestures for text editing tasks.",
"In recent years, most translators use computer-aided translation (CAT) tools for translation (Cop-pers et al., 2018).",
"CAT tools are workflow systems offering features like translation memory (TM), MT, or terminology management (Van den Bergh et al., 2015; Koskinen and Ruokonen, 2017).",
"Translators prefer to use CAT tools as they enhance terminology consistency, increase productivity, and improve the general quality of translations (Rossi and Chevrot, 2019; Moorkens and O'Brien, 2017).",
"While TM is still often valued more than MT (Moorkens and O'Brien, 2017), a recent study by Vela et al. (2019) shows that professional translators who were given a choice between translation from scratch, TM, and MT, chose MT in 80% of cases, highlighting the importance of PE of MT. Apart from translators' preference, Toral et al. (2018) found that PE phrase-based and neural MT (PBSMT and NMT) output increased productivity by 18% and 36% respectively compared to HT.",
"PE also changes the interaction patterns compared to manual translation from scratch (Carl and Jensen, 2010), leading to a significantly reduced amount of mouse and keyboard events (Green et al., 2013).",
"At the same time, navigational and deletion key usage increases by 72% during PE of NMT compared to HT (Toral et al., 2018).",
"This motivates our decision to explore modalities other than MK for PE and to specifically focus on efficient navigation and deletion.",
"1 Item(s) refers to word(s) and/or punctuation mark(s).",
"e-pen.",
"Studies on mobile PE via touch and speech (O'Brien et al., 2014; Torres-Hostench et al., 2017) show that participants especially like reordering words through touch drag and drop, and prefer voice input when translating from scratch, but stick to the iPhone keyboard for small changes.",
"Zapata (2016) also explores the use of voiceand touch-enabled devices; however, their study did not focus on PE, and used Microsoft Word instead of a proper CAT environment.",
"Teixeira et al. (2019) explore a combination of touch and speech for translation from scratch, translation using TM, and translation using MT and found that their touch implementation received poor feedback, while dictation turned out to be quite useful.",
"We started our research on multi-modal CAT tools with an elicitation study (Herbig et al., 2019), which showed that pen, touch, and speech interaction, as well as combinations thereof, should be combined with mouse and keyboard to improve PE of MT. A prototype based on the proposed interactions allows users to directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place (Herbig et al., 2020b).",
"Its evaluation with professional translators further showed that depending on the editing operation, different input modalities performed well (Herbig et al., 2020a).",
"To date, mid-air gestures have only been addressed in our elicitation study (Herbig et al., 2019), where participants did not expect them to be particularly useful.",
"However, participants only considered gestures on their own (i.e. also for text entry), and thus the combination with the keyboard merits further investigation, both in terms of an elicitation study and even more so in a practical evaluation of a prototype.",
"Hand gestures provide an intuitive and natural way of interaction (Sharma and Verma, 2015; Ortega and Nigay, 2009), but the design of appropriate gestures depends on the application type and context (Wachs et al., 2011; Weichert et al., 2013; Nielsen et al., 2003).",
"Gestures must be easy to learn and memorize, comfortable to perform, and should be metaphorically meaningful (Wachs et al., 2011; Weichert et al., 2013).",
"Ortega and Nigay (2009) explored the use of mid-air finger pointing to replace the mouse and showed that this approach significantly reduces the switching time compared to MK (almost to zero).",
"However, research on text editing using hand gestures is scarce.",
"One exception is Rives et al. (2014), who presented the idea of using gestures to perform the operations cut, copy, paste, select, undo, and delete to edit a document using gestures.",
"In their concept, the user enters the edit mode through a special gesture and then draws in the air to perform the above operations, e.g. a X for deletion.",
"A GES is a form of participatory design (Morris et al., 2014) where users are incorporated in the design process to inform an appropriate gesture set for a given application.",
"Important aspects include leading participants away from technical thinking (Nielsen et al., 2003), making them assume that gesture recognition is perfect, and considering their behavior as always acceptable (Wobbrock et al., 2009).",
"They should only be informed about the essential details of the task to avoid bias towards particular approaches (Wobbrock et al., 2005).",
"We conduct a GES for three reasons.",
"Firstly, there is no universal gesture set suitable for all applications (Nielsen et al., 2003).",
"Secondly, users prefer gestures designed through elicitation studies, because professional designers tends to generate more physically and conceptually complex gestures (Morris et al., 2014).",
"Thirdly, to the best of our knowledge, there is no other GES for text editing using GK which we could rely on.",
"In our GES, we employed the guessability approach (Wobbrock et al., 2005) which is intended to increase immediate usage of interfaces.",
"It consists of three phases: (1) defining so-called referents (i.e. common operations) that should be achievable through the system, (2) asking participants to propose a gesture for each referent, and (3) analyzing the collected data to generate the final gesture set.",
"Due to the COVID-19 pandemic we conducted an online GES.",
"Prior to commencing the study, ethical clearance was sought from the university ethical review board.",
"The study took 30 to 65 minutes per participant (avg: 46 minutes).",
"Participants: Fourteen right-handed freelance translators (with 14 different nationalities, 7 female and 7 male) were hired to participate in the study (avg age: 28, SD: 4.56).",
"Years of professional experience ranged from 2 to 15 years (avg: 5.29, SD: 3.43), offering a total of 19 language pairs.",
"In terms of CAT tool experience, about 2/3 of the participants reported using CAT tools to aid translation, with 1 to 4 years of experience.",
"Overall, participants were often in the earlier stages of their professional careers.",
"Three of the participants already had experience with gesture-based interfaces such as a TV remote control.",
"However, they rated their level of experience with gestural interfaces as Bad to Neutral.",
"Referents: Referents are described as the effect which is triggered by a gesture (Wobbrock et al., 2009).",
"The referents used in elicitation studies are an essential part, since the results established are limited to this set.",
"In our case, referents are PE operations; we will thus use referents and operations interchangeably.",
"To find good referents, we looked at different PE task classifications discussed in the literature.",
"Popovic et al. (2014) propose 5 PE operations: correcting word form, correcting word order, adding omission, deleting addition, and correcting lexical choice.",
"Koponen (2012) additionally distinguishes between moving single words or groups of words and the distance of the movement.",
"Based on these studies as well as our previous elicitation procedure (Herbig et al., 2019), we propose the referents presented below as PE tasks for which we explore gestural input.",
"I : Insertion D s : Deleting a single item D g : Deleting a group of items RP s : Replacing a single item RP g : Replacing a group of items RO s : Reordering a single item RO g : Reordering a group of items Performing those referents implicitly includes other operations, namely selecting a position, a word, or a group of words/characters.",
"Procedure: We interviewed each participant online via a video conferencing platform.",
"The first part of the study introduced PE of MT, discussing the current use of mouse and keyboard in CAT tools, and presenting the idea of mid-air hand gestures for PE without showing any concrete gestures that could induce bias.",
"Participants were then asked to fill out an online questionnaire capturing their demographics as well as other questions concerning CAT tools and MT in general.",
"They were also informed that they should assume perfect recognition and that all proposals are valid.",
"After each gesture proposal, participants supplied subjective ratings on 7-point Likert scales (7 = strongly agree) as to whether the gesture is:",
"(a) a good match for its intended purpose,",
"(b) easy to perform, and",
"(c) a good alternative to MK.",
"Additionally, we used a think-aloud protocol and videotaped the session for subsequent analysis.",
"Our referents were counterbalanced to avoid systematic errors.",
"Analysis: For the analysis, we grouped similar gestures based on the number of hands involved, their physical attributes and movement direction.",
"We report the largest groups per referent, but also the agreement rate (AR), characterizing the level of consensus between participants' proposals elicited (Vatavu and Wobbrock, 2015).",
"A high AR suggests that the most frequent gesture proposal is guessable and intuitive.",
"However, less frequent proposals can still yield interesting insights.",
"Unlike static gestures, dynamic gestures are hard to illustrate through images; therefore, we created a simple website that shows recorded animations of gestures for each participant and groups them based on the referent 2 .",
"While analyzing the data, consistent patterns emerged: Similar to the way the mouse is used, participants performed all referents by first selecting the text, then performing the editing operations, e.g. deleting.",
"Consequently, we decided in our analysis to separate the selection gestures from the editing operation gestures, analyzing and discussing each separately.",
"In addition, the proposed selection gestures are divided into two types: the selection of a single item and the selection of a group of items.",
"Group Selection: 8 unique gestures were proposed for group selection for the referents D g , RP g , and RO g 3 , with the same AR of 0.13 for each.",
"Two of these gestures were the most common, namely both indices (pointing with index fingers and moving them apart to select: see Figure 1a) and index + thumb (pointing with pinched index finger and 2 https://rashad-j.github.io/ conceptual-study 3 Detailed results are shown on our website. thumb and separating them to select a range).",
"Both indices was rated higher on ease than index + thumb , but received almost identical ratings for good match and alternative, indicating a slight preference for using both index fingers.",
"The remaining 6 proposals were interesting ideas like using a certain number of fingers to specify the number of words to select, however, none of these proposals reached agreement.",
"Single Item Selection: Participants proposed 5, 9, and 8 different gestures for the referents D s , RP s , and RO s , respectively.",
"Consequently, the high number of different proposals for replacing and reordering reduced the AR to 0.08 ( RP s ) and 0.09 ( RO s ) compared to 0.16 for D s .",
"Participants mostly proposed the same single item selection gesture for all subsequent referents, highlighting the importance of counter-balancing.",
"However, the index + thumb and both indices appear to also be preferred in selecting a single item, but with slightly varying agreement scores compared to group selection.",
"In addition, the gesture pointing (where a participant points with the index finger to place the cursor on the item) was highly preferred for single item selection.",
"The double-tap gesture was also proposed 3, 2, and 1 times for the referents D s , RO s , and RP s , respectively.",
"When asked about the reasons for their proposals, participants (p) gave responses such as p3: It is easy and intuitive or p5: It is really easy to select the start and then slide it to select.",
"Editing Operations: Unlike selection gestures, editing operations received very distinct gesture proposals except for a slight similarity between deletion and replacement (having one gesture proposal in common).",
"For the deletion referents, 9 unique gestures were proposed in single and group referents with an AR of 0.08 for both.",
"Three gestures appeared to be the most common among the participants.",
"Those were: move right index down (Figure 1b), move right index up , and move the right hand up (Figure 1c).",
"We decided to merge the index movement up and down into one gesture for two reasons: first, it is more intuitive to move the index finger up and then down (or down and up) because the user will have to move his hand back to a neutral position; second, participants p6 and p7 elaborated that moving the index finger up or down to delete is equally acceptable for them.",
"(e) Reordering by grabbing, moving, and releasing.",
"Moving the right hand up to delete was also common for the replace referent for both RP s and RP g .",
"In general gestures for the replacement referent received a slightly higher AR of 0.10 and 0.18 for single and group referent respectively.",
"Analyzing participants' thoughts, which were captured via think-aloud protocol, it appears that they wanted to delete first and then type the replacement item.",
"Another common proposal for replacement was suggested by almost half of the participants (6/14), namely to simply type after selecting a text.",
"Moreover, there were some proposals without agreement, e.g. p13 came up with the idea to strike-through text with the right index to delete and then type, whereas p14 suggested forming an X with his index fingers to delete before using the keyboard.",
"The reordering referents received three distinct gestures with AR of 0.16 and 0.26 for single and group referents respectively.",
"The first one was to select and move the text with both hands by moving them simultaneously (Figure 1d).",
"This gesture was proposed by 4 participants in RP s and 6 participants in RP g .",
"The second gesture was to point with the right index finger and start moving it to move the text immediately after selecting (proposed by 4 participants in both RP s and RP g ).",
"The third gesture was to grab with the right hand and move the hand to reorder the text, then open it to release (Figure 1e).",
"This gesture was proposed for RO s by only 3 participants.",
"Other individual proposals were made, e.g. p7 preferred to pinch using index finger and thumb, then move her hand to move the text, and then release the pinch to place the item.",
"Finally, the insertion referent received 5 unique gestures.",
"One of the proposals was to point with the right index finger and then move it to place the cursor in the required place.",
"This gesture was suggested by 9 out of 14 participants; hence, we see a high AR of 0.4.",
"It was also referred to as pointing for single item selection.",
"Once the cursor was placed in the target position, the user would switch to the keyboard for typing.",
"Together, these findings constitute a gesture set for text editing.",
"Our separation into selection (for single items and groups) and editing operations makes the PE tasks more consistent and better represents our participants' mindsets.",
"What is interesting is that selection of single items achieved high agreement on using a gesture to simply place the cursor on the item, without actually selecting it from start to end as with the mouse.",
"The deletion and replacement referents shared some gesture proposals because participants often wanted to replace by deletion followed by typing.",
"A further refinement to this set is presented below.",
"We used the GES results to define our final gesture set and implement a prototype.",
"For this, the frequently proposed gestures were explored in terms of implementation feasibility given the technology we are using.",
"If two gestures were conflicting, we dropped the less popular one; otherwise we slightly modified it to resolve the conflict.",
"For group selection , we found that the proposed index + thumb gesture practically fails upon selection across multiple lines; thus, we dropped it.",
"In contrast, using both indices can perform this kind of selection, so we implemented it as depicted in Figure 2.",
"Note that in contrast to the mouse, the group selection using both index fingers allows the user to manipulate both ends of the selection continuously instead of having one side fixed.",
"For single item/position selection , we only implemented pointing with the right index finger, as it already entails the double tap gesture.",
"For multi-line text, both single and group selection allow pointing with the index finger vertically and horizontally.",
"For deletion , D s and D g received similar gesture proposals.",
"Looking at the proposals in detail, we found that two participants also wanted to delete Figure 2: Mid-air gesture-based group selection by pointing with both indices.",
"with a hand down movement.",
"Thus, we implemented hand or finger movement down and up to offer consistent deletion possibilities (Figure 3).",
"For D s , it is sufficient if the cursor is placed somewhere on the word; there is no need to define the start and end of the word through a group selection.",
"Replacement can be achieved by either performing a group selection and typing directly, or by selecting a single item or group of items, deleting, and then typing.",
"Note that RP s can thus also be achieved without group selection.",
"The most complicated gestures were proposed for reordering ; the gestures are a compound of several sub-gestures.",
"Since reordering using the right index conflicts with cursor movement, we dropped it.",
"Moving both hands while in the selection position turned out to be difficult to perform, as maintaining the same distance between the hands at all times is challenging.",
"Therefore, we decided to merge it with the grab proposal; thus, after selection, a grab with the left hand indicates the start of the reordering process.",
"Then moving both hands or just the right index finger reorders the text (Figure 4).",
"Once the required position is reached, closing the right hand ends the reordering process and drops the text in the target position.",
"For single item reordering, it is again sufficient to place the cursor on the item without selecting the whole text.",
"The prototype was implemented as an extension to our open-source MMPE CAT interface (Her-big et al., 2020b,c) 4 .",
"MMPE allows translators to use input modalities such as speech, touch, pen, and eye tracking in combination with the standard mouse and keyboard.",
"However, it previously did not support mid-air gestures.",
"The main interface shows the source on the left, and the target on the right, with the currently edited segment enlarged.",
"This additional space turned out to be useful for hand gestures as it simplifies pointing.",
"In addition, all user interactions are logged.",
"MMPE uses Angular for the front-end, and node.js for the back-end, with WebSockets and REST APIs for the communication between them.",
"Our gesture detection relies on the Leap Motion Controller 5 , which is small in size (8cm * 3cm) and can be placed on the top of the keyboard (Figure 2).",
"The device provides frames of detected hands with 3D positions of finger joints, as well as some basic detection such as whether the fingers are extended or not.",
"Based on this information our gesture detection algorithm determines if one of the above gestures is being performed.",
"If only the right hand is detected with the index fingers extended, then the cursor will be updated based on hand movement.",
"Moving both index fingers selects the corresponding text in the interface (Figure 2).",
"When a deletion gesture is detected, the selected text (for group se-lection), or the word that the cursor is currently positioned on, is removed (Figure 3).",
"A grab with the left hand puts the currently selected text/word 4 https://github.com/NicoHerbig/MMPE 5 https://www.ultraleap.com/product/ leap-motion-controller/ containing the cursor in a reordering visualization.",
"Then, movements of the right index are tracked and move the highlighted text as well as an arrow indicator visualizing the currently calculated drop position.",
"Releasing the grab then places the text back into the input field at the indicated position (Figure 4).",
"To avoid unintended gestures while moving the hands back to the keyboard, the user can form a grab in both hands after executing a gesture.",
"Since people move their hands at different speeds, we further added sensitivity settings for gestures, similar to the standard mouse settings.",
"A video showing the interactions in practice can be found under: https://youtu.be/qIRYeojkFVc .",
"In contrast to the web-based elicitation study, we had to evaluate the prototype in-situ due to the hardware setup.",
"Given the COVID-19 situation, it was impossible to invite professional translators.",
"Therefore we had to conduct a study with our colleagues.",
"To mitigate the difference between non-translation professional subjects (computer scientists) and translation professionals, we ensured that similar to professional translators,",
"(i) all our participants have academic training (computing degrees instead of translation degrees),",
"(ii) that they are also highly familiar with traditional mouse and keyboard interfaces and use them in their day-to-day work,",
"(iii) all subjects have relevant language proficiency (source EN, target DE), and",
"(iv) all work in a multilingual EN-DE environment.",
"Furthermore, as the evaluation required participants only to perform pre-specified text editing operations, without involving any linguistic translation decisions, we hope to minimise the effect of not having translators as participants.",
"We use a methodology similar to that of our previous MMPE evaluation (Herbig et al., 2020a), however, here we compare a novel interaction modality (mid-air hand gestures) to mouse and keyboard:",
"Participants: Overall, 8 participants (7 male, 1 female) from the department of computer science took part in the experiment: 5 researchers, 2 PhD students, and 1 MSc student.",
"Their ages ranged from 24 to 39 (avg: 29, SD: 5).",
"All had English skills from B2 to C1 and were either German natives (7 of them) or had C1 German knowledge.",
"As computer scientists, they were all experienced keyboard users.",
"Participants were all right-handed and had normal vision.",
"Two of them indicated little experience with gesture-based interfaces, whereas the others reported a medium to very high level.",
"Apparatus: The main equipment consists of a 23 inch monitor, a NUC PC, a Leap Motion Controller, a standard wired mouse, and a standard keyboard with German layout.",
"The NUC PC is equipped with a processor of type Intel(R), Core i7 CPU @ 3.50 GHz, 16.0 GB of RAM, and an internal graphics processor capable of capturing 30 60 frames per second when used by the Leap Motion Controller.",
"Procedure: Prior to undertaking the study, ethical clearance was obtained from the ethical review board at the university.",
"The study consisted of 3 phases and took approximately 1 hour per participant.",
"The first phase introduced GK and the prototype interface, followed by capturing demographic information.",
"In the second phase, participants were given 10 15 minutes to explore GK to correct samples of incorrect MT output.",
"The third phase included the main experiment, in which participants performed a guided test to correct MT output in two conditions: mid-air gestures & keyboard (GK) and standard mouse & keyboard (MK).",
"For each of the referents from our elicitation study, 3 different segments had to be corrected in both conditions appearing in random order to capture comparable editing times.",
"The segments were taken from the WMT EN-DE 2018 news test set.",
"A single error was introduced per segment and a pop-up always told participants what error needed to be fixed and which modality to use.",
"After each referent (e.g. deleting a single item), participants were presented with the same three 7-point Likert scales as in our GES.",
"In addition we conducted semi-structured interviews to gather further feedback.",
"We had 2 conditions, 7 referents, and 3 segments per referent; thus, there were in total 2 7 3 = 42 segments to correct for each participant.",
"While this correction of pre-defined errors prevents us from drawing conclusions in a realistic setting, it allows us to explore each editing operation in isolation, including accurate time measures and subjective feedback, which is more important for a first prototype test.",
"Qualitative data was collected by the semi-structured interviews and Likert rating scales after each referent.",
"Figure 5a shows that operations manipulating single items were generally rated higher",
"than operations on groups of items.",
"D s was rated best, especially in terms of goodness and ease of use.",
"The majority of our participants commented that group selection was hard to perform, whereas the editing operations themselves were considered easy.",
"While comments differed depending on the referent, most of them were positive, and we frequently got statements such as it is great, [GK] felt like the same level of MK.",
"Quantitative data , shown in Figure 5b, captured the editing duration of both GK and MK for each referent, showing that the GK interquartile range was higher than the standard MK, except for RO g .",
"However, the most interesting finding was that, although the participants had years of experience using MK and were new to GK for text editing, the average editing time in the GK condition was very close to the average for MK in 4 out of 7 referents.",
"For analyzing statistical differences in our data, we ran Wilcoxon signed-rank tests since the normality assumption of t-tests was not fulfilled due to the small sample size.",
"As expected, given the limited amount of data, our statistical tests were unable to find significant differences between GK and MK for all operations 6 .",
"Similar to what we found in the qualitative analysis, the gestures operating on single items were more efficient than operations on groups of items in the GK condition.",
"D s was the fastest, followed by RP s and I .",
"On the other hand, group operations turned out to be the most time-consuming in both conditions, with the biggest differences between conditions for D g and RP g .",
"Interestingly, average editing time of RO g was nearly identical in both conditions, although the gesture-based approach showed more variance.",
"6 = 0 .",
"05 , P, P > , P = ( Ins = 0 . 641 , RP s = 0 . 312 , RP g = 0 . 945 , D s = 0 . 461 , D g = 0 . 461 , R s = 0 . 312 , R g = 0 . 383) In summary, the study has shown positive attitudes towards using mid-air hand gestures in combination with the keyboard for specific PE tasks.",
"Single item referents in particular received good feedback and were close to MK in terms of time measures.",
"Group selection was the main reason for disliking the GK and main source of additional editing time.",
"Based on the comments, the majority of participants found such group selections difficult to perform, especially when selecting across multiple lines, therefore, improvements should be made to the group selection in the future.",
"Overall, the results are encouraging, especially when considering the level of experience our participants had with MK and the short time for them to learn GK for text editing.",
"In particular the single item referents, and perhaps improved versions of the group referents, could provide benefit to the PE process as a complement, not replacement, to traditional mouse-and keyboard-based editing.",
"The use of MT and PE changes the task of translation from mostly text production to fixing errors within useful but partly incorrect MT output.",
"This affects the interface design of CAT tools, where translators need more support for text editing tasks.",
"The literature suggests that other interaction modalities than MK, or combinations thereof, could better support PE operations.",
"To the best of our knowledge, this is the first study that investigates the usefulness of mid-air hand gestures for PE of MT. Our GES with 14 freelance translators yielded a set of gestures to manipulate both single items and groups of items, which we further refined by considering conflicting gestures and exploring them practically.",
"The resulting prototype allows users to",
"(i) place the cursor by pointing with the index finger,",
"(ii) select ranges of text by pointing with both index fingers,",
"(iii) moving the hand or index finger up or down for deletion, and",
"(iv) reorder by selecting text, forming a grab with the left hand, pointing with the right index finger to the desired position, and releasing the grab to drop the text.",
"These gestures, combined with the keyboard, support all text manipulations required for PE.",
"Due to COVID-19, only a small-scale prototype evaluation with non-translator participants was possible.",
"Nonetheless, as the prototype design was guided by an elicitation study with translation professionals which usually leads to well-perceived interfaces and since we designed the study to mitigate bias induced by a sub-optimal participant sample, we expect that professional translators would have given us comparable feedback.",
"The findings overall suggest that GK could be a suitable interaction modality for PE and thus merits further research: Even though participants had years of experience with MK, our quantitative analysis of editing time showed that GK was only slightly slower for most operations, especially when manipulating single items.",
"Similarly, qualitative data shows that manipulating single items was rated higher than operations working on groups of items, as participants found the group selection gesture cumbersome to perform.",
"This finding indicates that further effort should be invested in improving group operations, which are also common in PE (e.g. by exploring if a different placement of the detection device could increase detection accuracy).",
"However, the appealing results on single item operations and the satisfactory results on group operations bode well and warrant further exploration with professional translators in a realistic PE scenario.",
"We do expect that after using the interface for a longer period of time, users will become more effective, as is common with other interfaces: the new interface is competing with decades of MK muscle-memory training.",
"However, only future long-term studies can show if editing times with GK will become as low as or even lower than with MK approaches.",
"Apart from efficiency, participants in our previous studies (Herbig et al., 2019) argued for having multiple suitable options to interact with text, instead of performing the same movements all day long.",
"Therefore, it is not just a question of speed but also user satisfaction and health: Additional modalities may help guard against carpal-tunnel syndrome and provide exercise alternatives in a seated environment.",
"To conclude, this new interaction modality, which so far was overlooked by research on CAT tools and post-editing, performs better than expected and therefore warrants further investigation.",
"Overall, we hope that future research will pick up the insights from the first and second study and help advance the state-of-the-art in PE.",
"This research was funded in part by the German Research Foundation (DFG) under grant number GE 2819/2-1 (project MMPE).",
"We thank all participants of the two studies for their valuable feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"The study of argumentation and the development of argument mining tools depends on the availability of annotated data, which is challenging to obtain in sucient quantity and quality.",
"We present a method that breaks down a popular but relatively complex discourse-level argument annotation scheme into a simpler, iterative procedure that can be applied even by untrained annotators.",
"We apply this method in a crowdsourcing setup and report on the reliability of the annotations obtained.",
"The source code for a tool implementing our annotation method, as well as the sample data we obtained (4909 gold-standard annotations across 982 documents), are freely released to the research community.",
"These are intended to serve the needs of qualitative research into argumentation, as well as of data-driven approaches to argument mining.",
"Empirical study of argumentation requires examples drawn from authentic, human-authored text.",
"Likewise, the applications of computational argumentation, such as argument mining, can require significant amounts of argument-annotated data to achieve reasonable performance.",
"However, this data can be challenging to obtain in sucient quantity and quality, particularly for discourse-level argumentation.",
"This is because discourse-level annotation schemes are necessarily complex with respect to discrimination and delimitation (i.e., the variety of markable elements in the text and how to define their boundaries), expressiveness (i.e., the need to tag relationships between annotated elements), and context weighting (i.e., the amount of context around markable units that needs to be considered) (Fort et al., 2012).",
"Successfully applying such schemes typically requires expensive and laborious work by expert-trained annotators.",
"In this paper, we present a method that facilitates the application of one such discourse-level argument annotation scheme (Stab and Gurevych, 2014).",
"This scheme has been widely cited and used in argumentation studies (e.g., Lippi and Torroni, 2015; Persing and Ng, 2015; Nguyen and Litman, 2015; Persing and Ng, 2016; Ghosh et al., 2016; Eger et al., 2017; Nguyen and Litman, 2018), and while it is fairly coarse-grained, it is expensive to apply to new texts.",
"Our method breaks down the annotation process into incremental, intuitive steps, each focusing on a small portion of the overall annotation scheme.",
"We apply this method in a crowdsourcing setup with annotators who receive no training other than a brief set of annotation guidelines, as well as in a more traditional setup with extensively trained local annotators.",
"We find that agreement between the two groups increases sublinearly with the number of crowd annotators, achieving up to U = 0 .",
"52 when using ten crowd workers.",
"We release not only our sample data set (consisting of 4909 gold-standard argument component and argument relation annotations over 982 product reviews), but also the source code for the annotation tool itself, which will allow others to produce their own quantityand quality-controlled annotated data sets.",
"While there exists a great diversity of argumentation theories in philosophy and logic (e.g., Toulmin, 2003; Freeman, 2011; Walton et al., 2012), they tend to agree that an argument can be decomposed into various interrelated components.",
"Inspired by Freeman's (2011) theory of the macro-structure of argumentation, Stab and Gurevych (2014) broadly categorize these components as claims (the conclusions that the audience is persuaded to accept or reject), premises (additional information oered to support or attack a given claim), and the major claim (the one central claim that relates all other claims in an argument).",
"Taken together, this can be conceptualized as a graph or tree structure, with vertices representing the argument components (ma-jor claims, claims, and premises) and the directed edges representing the argument relations (support and attack).",
"Stab and Gurevych (2014) annotate a collection of persuasive texts with this scheme, associating each argument component they identify with a contiguous span of text from the document.",
"They report that the annotation process involved several training sessions with their annotators, including collaborative annotation of eight example documents in order to obtain a common understanding of the task.",
"This level of eort is in line with what has been reported for other discourse-level argumentation schemes.",
"For example, annotation studies using the Freemanesque schemes of Peldszus and Stede (2013), Li et al. (2017), Haddadan et al. (2018), and Musi et al. (2018) all required one or more lengthy training sessions guided by argumentation experts and up to six pages of written instructions.",
"Using existing methods to alleviate the knowledge acquisition bottleneck, such as incidental supervision (Roth, 2017), or pre-annotation (Fort and Sagot, 2010), could speed the work of annotators possibly at the risk of introducing a training bias but would not obviate the need for expert training.",
"(In any case, pre-annotation has never, to our knowledge, been successfully applied to hard discourse-level tasks such as annotating argumentation structures.)",
"The complexity of the annotation scheme also seemingly rules out the use of crowdsourcing (Howe, 2006) and gamification (von Ahn, 2006), which are geared towards microtasks that are quick and easy for humans.",
"Though one previous study has decomposed a discourse-level scheme for use with crowdsourcing (Kawahara et al., 2014), the constraints it imposes (fixed-size annotations, maximum document length of three sentences) are too restrictive for argumentation annotation.",
"By contrast, the crowdsourcing approach of Sukhareva et al. (2016), while not concerned with discourse-spanning annotations, employs a few mechanisms that are relevant for our own task.",
"Their approach, intended for the labelling of semantic verb relations, breaks down the annotation work into a series of hierarchical, atomic microtasks.",
"Only those parts of the annotation instructions relevant to the current microtask are shown to the annotator.",
"Furthermore, annotators are encouraged to think of connecting words (specifically, generally speaking, in other words, etc.) that justify their relation annotations.",
"As described in the following section, we adapt and extend these mechanisms for our own annotation method.",
"Our approach to mitigating the knowledge acquisition problem is an iterative procedure by which annotators apply a distinct subset of the annotation scheme at each step.",
"In this manner, complex discourse-level annotations are built up piecemeal in simple steps.",
"The iterative annotation process is supported by an online JavaScript-based interface.",
"Taken together, this allows the Stab and Gurevych (2014) annotation scheme to be applied even by untrained annotators in a crowdsourcing setup.",
"In the first step of the annotation process, annotators are presented with the complete argumentative text and asked to select the one phrase (i.e., an arbitrary sequence of words) that best represents the major claim, or else to indicate that there is no such passage.",
"1 In the second step, annotators are presented once again with the full argument, but with its major claim marked.",
"2 The annotators then select the claimsthat is, all phrases that directly speak to the major claim, as well as whether those passages support or attack the major claimor else 1 If the user indicates in any step that there is no text span corresponding to the argument component type, we ask them to perform a short alternative task.",
"This is to prevent faithless workers from taking the easy way out of the annotation task, but also to collect further annotations of interest to us.",
"2 In the second and third steps, the marked annotation is not necessarily the one applied by the annotator in the previous step.",
"In fact, as we explain below, in our study we source all annotations from a given step simultaneously, distill them into a gold standard, and mark these gold-standard annotations for the next step.",
"With this setup, there is no need for a given annotator to participate in all three steps.",
"indicate that there are no such text spans.",
"In the third step, annotators see the full argument with one of its claims marked.",
"As with the previous step, annotators select text spans corresponding to the premises of the claim and indicate each premises's stance; they also have the option of reporting that the claim has no premises.",
"The annotation tool automatically enforces the restrictions that annotations must be contiguous, must begin and end on a word boundary, and cannot overlap with their siblings or ancestors.",
"Crucially, the instructions given to annotators at each step of the process do not attempt to explain the entire annotation scheme but rather describe only the immediate annotation task in layman's terms.",
"Furthermore, the tool attempts to make this task more intuitive for users by framing the second and third steps as a sentence completion task.",
"An example of this is the interface for annotating claims (see Fig. 1).",
"The full argumentative text (in this case, a product review) is shown on the left half of the screen, with the major claim marked, and we separately show a copy of the major claim on the right half of the screen.",
"The user is instructed to extend the major claim with additional supporting or attacking information by appending a because or but clause, respectively.",
"The user does this by pressing the but or because button below the major claim and then highlighting a sentence or phrase from the review.",
"To assess the suitability of our annotation procedure, we applied it in a crowdsourcing setup.",
"Measuring interannotator agreement for crowdsourced annotations is problematic, however, because there are typically a huge number of annotators, most of whom annotate only a tiny fraction of the data set.",
"To gauge the reliability of our crowdsourced annotations, we instead conducted an experiment that compared them to those produced by expert-trained annotators.",
"For the experiment, we randomly selected 40 Amazon product reviews from the McAuley et al. (2015) data setfour from each of ten product categories.",
"Each review was annotated for major claims by ten crowd workers; all 40 reviews were also annotated for major claims by a fixed group of three locally recruited annotators trained by argumentation experts.",
"3 We then converted the 3 We engaged US-based workers from Amazon Mechanical annotated reviews to BIO tokens (Ramshaw and Marcus, 1995) and applied the annotation aggre-gation/denoising tool MACE (Hovy et al., 2013) to select at most two gold-standard major-claim annotations per review, one from the crowd ( crowd ) and one from the trained annotators ( train ).",
"4 We then compared the crowd and train gold standards, one review at a time, using Krippendor's (1995) U , a unitizing measure that considers the token-level boundaries of the text spans marked by each annotator.",
"We repeated this process to obtain and evaluate crowd and train claim annotations on the train major claims, and then again for crowd and train premise annotations on the train claims.",
"Note that in the gold standards for some reviews, there may be no major claim, no claims associated with the major claim, and/or no premises associated with a given claim.",
"In many cases this is because the annotators generally agreed that such argument components were not present in the text.",
"However, in other cases the various annotators did identify such argument components, but the agreement among them was too low for MACE to output a gold-standard annotation.",
"A quandary therefore arises when deciding how to treat reviews where neither the crowd nor the train gold standard contains a given type of argument component.",
"There is no (easy) way of determining from the MACE output whether missing annotations are due to agreement or disagreement, and even if this information were available, it is not clear how it could be incorporated into the calculation of U .",
"For this reason, we apply two dierent strategies for handling missing annotations, and provide separate U calculations for each.",
"The first strategy, skip , disregards missing annotations, excluding them from the mean agreement calculation.",
"The second strategy, agree , treats missing annotations as total agreement ( U = 1).",
"5 When using all ten crowdsourced annotations per review and the agree strategy, we achieved mean U scores of 0.4104, 0.5231, and 0.4385 for major claims, claims, and premises, respectively.",
"With the skip strategy, the respective scores are 0.4104, 0.4845, and 0.2201.",
"As expected, these scores Turk at the US federal minimum wage of $7.25/hour.",
"Our expert-trained annotators were salaried research sta whose equivalent hourly rate was three to five times higher.",
"4 MACE accepts a threshold value that is used to discard instances that cannot be confidently assigned a gold label; we set this to 0.9.",
"5 It is not possible to treat the missing annotations as total disagreement because per Krippendor (1995), U has no concept of this; there is no lowest disagreement score.",
"6 However, they are broadly comparable to interannotator agreement scores reported in similar (and in some cases, even simpler) discourse-level argument annotation studies with expert-trained annotators, such as Aha-roni et al. (2014) ( = 0 . 4), Musi et al. (2018) ( = 0 . 296), and Li et al. (2017) ( U = 0 . 2452).",
"To measure how the number of crowd annotations impacts reliability, we performed an ablation study where we iteratively removed one crowd annotation at random from each review and repeated the MACE distillation and U calculation.",
"The study was repeated 100 times and the resulting U scores averaged.",
"The results are shown in Fig. 2, which plots the average U scores for major claims, claims, and premises when using one to ten crowd annotations per review.",
"The plots are shown as error bars, where the top of the bar is the average agree score and the bottom is the average skip score.",
"Reliability scores start to be uniformly positive with three annotations, with agreement for major claims and premises plateauing around seven annotations.",
"The dierence between the agree and skip scores is sizable only for premises.",
"Having satisfied ourselves that our method can produce reliable annotations via crowdsourcing, we applied it to a much larger subset of McAuley et al. (2015).",
"The raw data consists of 982 English product reviews randomly sampled from the same ten product categories used in our evaluation study.",
"6 Apart from the fact that we used untrained annotators, the dierence in agreement may also be due in part to our use of online user-generated content as opposed to student essays.",
"For each argument component type in a review, we sourced annotations from five crowd workers, considering this to be an acceptable trade-o between annotation quality and cost.",
"The MACE-produced gold standard contains 4909 annotations (937 major claims, 1134 claims, 852 premises, and 1986 argument relations).",
"Our data set is distinguished from the review corpora of Garca Villalba and Saint-Dizier (2012) and Wyner et al. (2012) in that it is much larger, covers a broader range of product types, and is freely released under the CC BY 4.0 licence.",
"It is comparable in size to but broader in scope than the Chinese-language hotel review corpus of Li et al. (2017).",
"Our data is distributed 7 as a set of XML Metadata Interchange (XMI) files, one per review, containing stand-o argument annotations that cross-reference the original texts from McAuley et al. (2015).",
"(Be-cause the original review texts are not available under a free licence, we do not include them in our distribution, but we provide a script for extracting them from the original corpus and merging them into our XMIs.)",
"Also included is the JavaScript source for our annotation tool, as well as the Java source for preprocessing the raw data and postprocessing the annotations with MACE.",
"This code can be used to crowdsource further annotated data sets using the McAuley et al. (2015) data at the desired level of quality.",
"It could also be adapted to work with other raw corpora and other Freemanesque annotation schemes.",
"argumen-7 https://github.com/UKPLab/naacl2019-argument-annotations",
"tation annotations of Stab and Gurevych (2014), which may be adaptable to other annotation schemes based on Freeman's (2011) notion of argumentation.",
"Our analysis shows that crowdsourced annotations obtained with our method yield substantial agreement with those obtained, with much greater eort, by expert-trained annotators.",
"We have used our method to quickly and cheaply produce a large, argument-annotated data set of product reviews, which we freely release, along with the source code to our annotation interface and processing tools.",
"Unlike with flat, context-free argument data such as that of Stab et al. (2018), training on our annotations would conceivably permit the identification not just of isolated argument components but of more complex argument structures.",
"Our resources may also be of use for qualitative research on the linguistic features and rhetorical mechanisms of argumentative text (e.g., Peldszus and Stede, 2016).",
"For future work, we are investigating alternatives to MACE, which was designed for categorical annotations rather than the sequence labelling of our task.",
"In particular, we are looking into the Bayesian method of Simpson and Gurevych (2018), which takes advantage of the sequential dependencies between BIO tags, and works more robustly with noisy, subjective data such as ours.",
"The authors thank Johannes Daxenberger and Christian Stab for many insightful discussions.",
"This work has been supported by the German Federal Ministry of Education and Research (BMBF) under the promotional references 03VP02540 (Ar-gumenText) and 01UG1816B (CEDIFOR), and by the DFG-funded research training group Adaptive Preparation of Information form Heterogeneous Sources (AIPHES, GRK 1994/1)."
] | [
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"method",
"method",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"Open Information Extraction systems extract (subject text, relation text, object text) triples from raw text.",
"Some triples are textual versions of facts, i.e., non-canonicalized mentions of entities and relations.",
"In this paper, we investigate whether it is possible to infer new facts directly from the open knowledge graph without any canonicalization or any supervision from curated knowledge.",
"For this purpose, we propose the open link prediction task, i.e., predicting test facts by completing (sub-ject text, relation text, ?) questions.",
"An evaluation in such a setup raises the question if a correct prediction is actually a new fact that was induced by reasoning over the open knowledge graph or if it can be trivially explained.",
"For example, facts can appear in different paraphrased textual variants, which can lead to test leakage.",
"To this end, we propose an evaluation protocol and a methodology for creating the open link prediction benchmark OLPBENCH .",
"We performed experiments with a prototypical knowledge graph embedding model for open link prediction.",
"While the task is very challenging, our results suggests that it is possible to predict genuinely new facts, which can not be trivially explained.",
"A knowledge graph (KG) (Hayes-Roth, 1983) is a set of (subject, relation, object)-triples, where the subject and object correspond to vertices, and relations to labeled edges.",
"In curated KGs, each triple is fully disambiguated against a fixed vocabulary of entities 1 and relations.",
"An application for KGs, for example, is the problem of drug discovery based on bio-medical knowledge (Mohamed et al., 2019).",
"The construction of a curated bio-medical KG, which is required for 1 For brevity, entities denotes both entities (e.g. Prince) and concepts (e.g. musician) throughout the paper.",
"such an approach, is challenging and constrained by the available amount of human effort and domain expertise.",
"Many tools that could assist humans in KG construction (e.g., an entity linker) need a KG to begin with.",
"Moreover, current methods for KG construction often rely on the rich structure of Wikipedia, such as links and infoboxes, which are not available for every domain.",
"Therefore, we ask if it is possible to make predictions about, for example, new drug applications from raw text without the intermediate step of KG construction.",
"Open information extraction systems (OIE) (Et-zioni et al., 2011) automatically extract (sub-ject text, relation text, object text) -triples from unstructured data such as text.",
"We can view OIE data as an open knowledge graph (OKG) (Galarraga et al., 2014), in which vertices correspond to mentions of entities and edges to open relations (see Fig. 1).",
"Our overarching interest is whether and how we can reason over an OKG without any canonicalization and without any supervision on its latent factual knowledge.",
"The focus of this study are the challenges of benchmarking the inference abilities of models in such a setup.",
"A common task that requires reasoning over a Open Link Prediction Link Prediction NBC ?",
"KG is link prediction (LP).",
"The goal of LP is to predict missing facts in a KG.",
"In general, LP is defined as answering questions such as (NBC, headquarterIn, ?) or (?, headquarterIn, NewYorkCity) ; see Fig. 2a.",
"In OKGs, we define open link prediction (OLP) as follows: Given an OKG and a question consisting of an entity mention and an open relation, predict mentions as answers.",
"A predicted mention is correct if it is a mention of the correct answer entity.",
"For example, given the question (NBC-TV, has office in, ?) , correct answers include NYC and New York ; see Fig. 2b).",
"To evaluate LP performance, the LP model is trained on known facts and evaluated to predict unknown facts, i.e., facts not seen during training.",
"A simple but problematic way to transfer this approach to OKGs is to sample a set of evaluation triples from the OKG and to use the remaining part of the OKG for training.",
"To see why this approach is problematic, consider the test triple (NBC-TV, has office in, New York) and suppose that the triple (NBC, has headquarter in, NYC) is also part of the OKG.",
"The latter triple essentially leaks the test fact.",
"If we do not remove such facts from the training data, a successful models only paraphrases known facts but does not perform reasoning, i.e., does not predict genuinely new facts.",
"Furthermore, we also want to quantify if there are other trivial explanations for the prediction of an evaluation fact.",
"For example, how much can be predicted with simple popularity statistics, i.e., only the mention, e.g. (NBC-TV, ?) , or only the relation, e.g. (has office in, ?) .",
"Such non-relational information also does not require reasoning over the graph.",
"To experimentally explore whether it is possible to predict new facts, we focus on knowledge graph embedding (KGE) models (Nickel et al., 2016), which have been applied successfully to LP in KGs.",
"Such models can be easily extended to handle the surface forms of mentions and open relations.",
"Our contributions are as follows: We propose the OLP task, an OLP evaluation protocol, and a method to create an OLP benchmark dataset.",
"Using the latter method, we created a large OLP benchmark called OLPBENCH , which was derived from the state-of-the-art OIE corpus OPIEC (Gash-teovski et al., 2019).",
"OLPBENCH contains 30 M open triples, 1 M distinct open relations and 2 .",
"5 M distinct mentions of approximately 800 K entities.",
"We investigate the effect of paraphrasing and nonrelational information on the performance of a prototypical KGE model for OLP.",
"We also investigate the influence of entity knowledge during model selection with different types of validation data.",
"For training KGE models on such large datasets, we describe an efficient training method.",
"In our experiments, we found the OLP task and OLPBENCH to be very challenging.",
"Still, the KGE model we considered was able to predict genuinely new facts.",
"We also show that paraphrasing and non-relational information can indeed dilute performance evaluation, but can be remedied by appropriate dataset construction and experimental settings.",
"OKGs can be constructed in a fully automatic way.",
"They are open in that they do not require a vocabulary of entities and relations.",
"For this reason, they can capture more information than curated KGs.",
"For example, different entity mentions can refer to different versions of an entity at different points of time, e.g., Senator Barack Obama and Pres-ident Barack Obama .",
"Similarly, relations may be of varying specificity: headquarterIn may be expressed directly by open relations such as be based in or operate from but may also be implied by relocated their offices to .",
"In contrast to KGs, OKGs contain rich conceptual knowledge.",
"For example, the triple (a class action lawsuit, is brought by, shareholders) does not directly encode entity knowledge, although it does provide information about entities that link to a class action lawsuit or shareholders .",
"OKGs tend to be noisier and the factual knowledge is less certain than in a KG, however.",
"They NBC-TV Marseille Los Angeles has office in ?",
"can not directly replace KGs.",
"OKGs have mostly been used as a weak augmentation to KGs, e.g., to infer new unseen entities or to aid link prediction (see App. A for a comprehensive discussion of related work).",
"Much of prior work that solely leverages OKGs without a reference KGand therein is closest to our workfocused on canonicalization and left inference as a follow-up step (Cohen et al., 2000, inter alia).",
"In contrast, we propose to evaluate inference in OKGs with OLP directly.",
"The open link prediction task is based on the link prediction task for KGs (Nickel et al., 2016), which we describe first.",
"Let E be a set of entities, R be a set of relations, and T E R E be a knowledge graph.",
"Consider questions of the form q h = (? , k, j ) or q t = ( i, k, ?) , where i, j E is a head and tail entity, respectively, and k R is a relation.",
"The link prediction problem is to provide answers that are correct but not yet present in T .",
"In OKGs, only mentions of entities and open relations are observed.",
"We model each entity mention and each open relation as a non-empty sequence of tokens from some vocabulary V (e.g., a set of words).",
"Denote by M = V + the set of all such sequences and observe that M is unbounded.",
"An open knowledge graph T MMM consists of triples of form ( i, k, j ) , where i, j M are head and tail entity mentions ,",
"resp., and k M is an open relation .",
"Note that we overload notation for readability: i , j , and k refer to entity mentions and open relations in OKGs, but to disambiguated entities and relations in KGs.",
"The intended meaning will always be clear from the context.",
"We denote by M ( E ) and M ( R ) the sets of entity and relations present in T , respectively.",
"The open link prediction task is to predict new and correct answers to questions ( i, k, ?) or (? , k, j ) .",
"Answers are taken from M ( E ) , whereas questions may refer to arbitrary mentions of entities and open relations from M .",
"For example, for the question ( NBC-TV , has office in , ?), we expect an answer from the set of mentions { New York , NYC , . . . } of the entity NewYorkCity .",
"Informally, an answer ( i, k, j ) is correct if there is a correct triple ( e 1 , r, e 2 ) , where e 1 and e 2 are entities and r is a relation, such that i , j , and k are mentions of e 1 , e 2 , and r , respectively.",
"To describe our proposed evaluation protocol, we first revisit the most commonly used methodology to evaluate link prediction methods for KGs, i.e., the entity-ranking protocol (Bordes et al., 2013).",
"Then, we discuss its adaptation to OLP, which we call the mention-ranking protocol (see Fig. 3).",
"KGs and entity ranking.",
"For each triple z = ( i, k, j ) in the evaluation data, a link prediction model ranks the answers for two questions, q t ( z ) = ( i, k, ?) and q h ( z ) = (? , k, j ) .",
"The model is evaluated based on the ranks of the correct entities j and i ; this setting is called raw .",
"When true answers for q t ( z ) and q h ( z ) other than j and i are filtered from the rankings, then the setting is called filtered .",
"OKGs and mention ranking.",
"In OLP, the model predicts a ranked list of mentions.",
"But questions might have multiple equivalent true answers, i.e., answers that refer to the same entity but use different mentions.",
"Our evaluation metrics are based on the highest rank of a correct answer mention in the ranking.",
"For the filtered setting, the mentions of known answer entities other than the evaluated entity are filtered from the ranking.",
"This mention-ranking protocol thus uses knowledge of alternative mentions of the entity in the evaluation triple to obtain a suitable ranking.",
"The mention-ranking protocol therefore requires",
"(i) ground truth annotations for the entity mentions in the head and tail of the evaluation data, and",
"(ii) a comprehensive set of mentions for these entities.",
"An OLP benchmark should enable us to evaluate a model's capability to predict genuinely new facts, i.e., facts can not be trivially derived.",
"Due to the na-ture of OKGs, paraphrasing of facts may leak facts from validation and test data into training, making the prediction of such evaluation facts trivial.",
"Nevertheless, the creation of training and validation data should require as little human effort as possible so that the methodology can be readily applied to new domains.",
"Our mention-ranking protocol uses knowledge about entities for disambiguation (of the evaluation data, not the training data), however, which requires human effort to create.",
"We investigate experimentally to what extent this entity knowledge is necessary for model selection and, in turn, how much manual effort is required to create a suitable validation dataset.",
"In the following, we describe the source dataset of OLPBENCH and discuss how we addressed the points above to create evaluation and training data.",
"OLPBENCH is based on OPIEC (Gashteovski et al., 2019), a recently published dataset of OIE triples that were extracted from the text of English Wikipedia with the state-of-the-art OIE system MinIE (Gashteovski et al., 2017).",
"We used a subset of 30M distinct triples, comprised of 2.5M entity mentions and 1M open relations.",
"In 1.25M of these triples, the subject and the object contained a Wikipedia link.",
"Fig. 4 shows how a Wikipedia link is used to disambiguate a triple's subject and object mentions.",
"Tab.",
"1 shows an excerpt from the unlinked and linked triples.",
"For the evaluation protocol, we collected a dictionary, where each entity Was the second ship of the United States Navy to be named for William Conway, who distinguished himself during the Civil War.",
"From the source dataset, we created validation and test data with the following requirements:",
"Data quality.",
"The evaluation data should be challenging, and noise should be limited as much as possible.",
"We chose a pragmatic and easy heuristic: we did not consider short relations with less than three tokens as candidates for sampling evaluation data.",
"This decision was based on the following observations:",
"(i) Due to the OPIEC's extractions, short relationse.g. (kerry s. walters, is, professor emeritus) are often subsumed by longer relationse.g. (kerry s. walters, is professor emeritus of, philosophy) , which would always lead to leakage from the longer relation to the shorter relation.",
"(ii) Longer relations are less likely to be easily captured by simple patterns that are already successfully used by KG construction methods, e.g. (elizabeth of hungary, is the last member of, the house of arpad) .",
"We conjecture that long relations are more interesting for evaluation to measure progress in reasoning with OKG data.",
"(iii) The automatically extracted entity annotations were slightly noisier for short relations; e.g., (marc anthony, is singer) had the object entity annotation SinFrenos .",
"Human effort for data creation.",
"The mention-ranking protocol uses knowledge about entities for disambiguation.",
"We want to experimentally quantify the influence of this entity knowledge on model selection, i.e., whether entity knowledge is necessary to find a good model.",
"If so, human expertise is necessary to create the validation data.",
"While our goal is to require almost no human domain expertise to learn a good model, the size of validation data is much smaller than the size of the training data.",
"Therefore, this effortif helpfulmay be subject relation object subject mentions object mentions U n li n ke d conway has plot henry s.",
"Entity links are discarded, i.e., the mention-ranking protocol cannot be used for validation.",
"To evaluate LP models for KGs, evaluation facts are generated by sampling from the KG.",
"Given an evaluation triple ( i, k, j ) , the simplest action to avoid leakage from the training data is to remove only this evaluation triple from training.",
"For KGs, it was observed this simple approach is not satisfactory in that evaluation answers may still leak and thus can be trivially inferred (Toutanova et al., 2015; Dettmers et al., 2018).",
"For example, an evaluation triple ( a, siblingOf, b ) can be trivially answered with the training triple ( b, siblingOf, a ) .",
"In OKGs, paraphrases of relations pose additional sources of leakage.",
"For example, the relations is in and located in may contain many of the same entity pairs.",
"For evaluation triple ( i, k, j ) , such leakage can be prevented by removing any other relation between i and j from the training data.",
"However, individual tokens in the arguments or relations may also cause leakage.",
"For example, information about test triple ( NBC-TV, has office in, NYC ) is leaked by triples such as ( NBC Television, has NYC offices in, Rockefeller Plaza ) even though it has different arguments.",
"Fig. 5 visualizes this example.",
"We use three levels of leakage removal from training: SIMPLE , BASIC , and THOROUGH .",
"To match evaluation triple ( i, k, j ) with training triples, we ignored word order and stopwords.",
"SIMPLE removal.",
"Only the triple ( i, k, j ) is removed.",
"Triples with alternative mentions for i or j are kept.",
"THOROUGH removal.",
"Additionally to BASIC removal, we also remove triples from training matched by the following patterns.",
"The patterns are explained with the example (J. Smith, is defender of, Liverpool) :",
"(a) ( i, , j ) and ( j, , i ) .",
"E.g., matches (J. Smith, is player of, Liverpool) .",
"(b) ( i, k + j, ) and ( , k + i, j ) .",
"2 E.g., matches (J. Smith, is Liverpool's defender on, Satur-day) .",
"(c) ( i + k + j, , ) and ( , , i + k + j ) .",
"E.g., matches (Liverpool defender J. Smith, kicked, the ball) .",
"For OLPBENCH , THOROUGH removed 196 , 717 more triples from the OKG than BASIC .",
"Note that this yields three different training data sets.",
"KG embedding (KGE) models have been successfully applied for LP in KGs, and they can be easily extended to handle surface forms, i.e., mentions and open relations.",
"We briefly describe KGE models and their extension.",
"Knowledge Graph Embedding (KGE) model.",
"A KGE model (Nickel et al., 2016) associates an embedding with each entity and each relation.",
"The embeddings are dense vector representations that are learned with an LP objective.",
"They are used to compute a KGE model-specific score s ( i, k, j ) for a triple ( i, k, j ) ; the goal is to predict high scores for true triples and low scores for wrong triples.",
"KGE model with composition.",
"For our experiments, we considered composition functions to create entity and relation representations from the tokens of the surface form.",
"Such an approach has been used, for example, by Toutanova et al. (2015) to produce open relation embedding via a CNN.",
"A model that reads the tokens of mentions and open relations can, in principle, handle any mention and open relation as long as the tokens have been observed during training.",
"We use a general model architecture that combines a relational model and a composition func-( Jamie Carragher, is defender of, Liverpool ) mention/relation tokens token embeddings mention/relation embeddings score for triple Figure 6: KGE model with composition.",
"tion, see Fig. 6.",
"Formally, let V ( E ) + be the set of non-empty token sequences over the token vocabulary V ( E ) of entity mentions.",
"We denote by d, o N + the size of the embeddings of entities and relations.",
"We first embed each entity mention into a continuous vector space via an entity mention embedding function f : V ( E ) + R d .",
"Similarly, each open relation is embedded into a continuous vector space via a relation embedding function g : V ( R ) + R o .",
"The embeddings are then fed into a relational scoring function RM : R d R o R d R .",
"Given a triple ( i, k, j ) , where i, j V ( E ) + and k V ( R ) + , our model computes the final score as s ( i, k, j ) = RM ( f ( i ) , g ( k ) , f ( j ) ) .",
"In our experimental study, we investigated whether a simple prototypical OLP model can predict genuinely new facts or if many successful predictions can be trivially explained by leakage or nonrelational information.",
"Our goal was to study the effectiveness and necessity of the mention-ranking protocol and leakage removal, and how much human effort is necessary to create suitable validation data.",
"Finally, we inspected data and model quality.",
"We first describe the models and their training, then the performance metrics, and finally the evaluation.",
"In our experimental results, model performance dropped by 25% with THOROUGH leakage removal so that leakage due to paraphrasing is indeed a concern.",
"We also implemented two diagnostic models that use non-relational information (only parts of a triple) to predict answers.",
"These models reached 2025% of the prototypical model's performance, which indicates that relational modelling is important.",
"In our quality and error analysis, we found that at least 74% of the prediction errors were not due to noisy data.",
"A majority of incorrectly predicted entity mentions have a type similar to the one of the true entity.",
"Prototypical model.",
"We use COMPLEX (Trouil-lon et al., 2016) as relational model, which is an efficient bilinear model and has shown state-of-the-art results.",
"For the composition functions f and g , we used an LSTM (Hochreiter and Schmidhuber, 1997) with one layer and the hidden size equivalent to the token embedding size.",
"We call this model COMPLEX-LSTM.",
"3 Diagnostic models.",
"To expose potential biases in the data, we employ two diagnostic models to discover how many questions can simply be answered without looking at the whole question, i.e., by exploiting non-relational information .",
"Given question ( i, k, ?) , the model PREDICT-WITH-REL considers ( r, ?) for scoring.",
"E.g., for question (Jamie Carragher, is defender of, ?) , we actually ask (is defender of, ?) .",
"This is likely to work reasonably for relations that are specific about the potential answer entities; e.g., predicting popular football clubs for (is defender of, ?) .",
"The model uses scoring functions s t : R o R d R and s h : R d R o R for questions ( i, k, ?) and (? , k, j ) respectively: s t ( k, e ) = g ( k ) T f ( j ) , s h ( i, k ) = f ( i ) T g ( k ) Likewise, the PREDICT-WITH-ENT model ignores the relation by computing a score for pair ( i, j ) .",
"We use s e ( i, j ) = f ( i ) T f ( j ) Training.",
"Performance metrics.",
"For evaluating a model's predictions, we use the ranking metrics mean reciprocal rank (MRR) and HITS@k.",
"MRR is sensitive to the top-3 ranks with rapidly decaying reward, 3 In a preliminary study, we investigated COMPLEX , ANALOGY , DISTMULT and RESCAL as relational models.",
"COMPLEX was the most efficient and best performing model.",
"For composition functions, we also investigated unigram pooling, bi-gram pooling with CNNs, self-attention and LSTMs.",
"Here LSTMs worked well consistently.",
"See App.",
"E for additional results.",
"while HITS@ k equally rewards correct answers in the topk ranks.",
"See App.",
"D for a more formal definition of MRR and HITS@k.",
"The ranks are based on mention ranking for VALID-LINKED and TEST and on entity-ranking (treating distinct mentions as distinct entities) for VALID-ALL and VALID-MENTION .",
"Influence of leakage.",
"In Tab.",
"2, we observed that BASIC leakage removal of evaluation data lowers the performance of all models considerably in contrast to the SIMPLE leakage removal.",
"With the THOROUGH leakage removal, performace drops further; e.g., HITS@50 performance dropped by 25% from SIMPLE .",
"This confirms our conjecture that leakage can trivially explain some successful predictions.",
"Most predictions, however, cannot be explained by paraphrasing leakage.",
"Influence of non-relational information.",
"In Tab.",
"2, we see that PREDICT-WITH-ENT , which essentially learns popularity statistics between entity mentions, has no success on the evaluation data.",
"However, PREDICT-WITH-REL reaches 20 25% of HITS@50 performance of COMPLEXLSTM by simply predicting popular mentions for a relation, even in the THOROUGH setting.",
"Effectiveness of mention-ranking.",
"Tab.",
"3 shows validation results for the three types of validation data for COMPLEX-LSTM and THOROUGH removal.",
"The evaluation protocol has access to alternative mentions only in VALID-LINKED , but not in VALID-ALL and VALID-MENTION .",
"Clearly, using VALID-LINKED results in higher metrics when models associate different mentions to an answer entity.",
"Influence of model selection.",
"The THOROUGH block of Tab.",
"2 shows the results for model selection based on VALID-ALL , VALID-MENTION or VALID-LINKED .",
"In VALID-ALL , many triples contain common nouns instead of entity mentions, while in VALID-MENTION or VALID-LINKED triples have entity mentions in both arguments.",
"Model selection based on VALID-ALL clearly picked a weaker model than model selection based on VALID-LINKED , i.e., it led to a drop of 35% of HITS@50 performance.",
"However, there is no improvement when we pick a model based on VALIDLINKED versus VALID-MENTION .",
"Thus, computing the MRR using alternative entity mentions did not improve model selection, even thoughas Tab.",
"3 showsthe mention-ranking protocol gives more credit when alternative mentions are ranked higher.",
"Our results suggest that it may suffice to use validation data that contains entity mentions but avoid costly entity disambiguation.",
"Overall performance.",
"In Tab.",
"2 we observed that performance numbers seem generally low.",
"For comparison, the HITS@10 of COMPLEX on FB15k-237a standard evaluation dataset for LP in curated KGslies between 45% and 55%.",
"We conjecture that this drop may be due to:",
"(i) The Leakage Model Removal Model Selection MRR HITS@1 HITS@10 HITS@50 COMPLEX-LSTM ALL 2.9 1.8 5.0 8.9 THOROUGHCOMPLEX-LSTM MENTION 3.6 2.0 6.5 13.0 COMPLEX-LSTM LINKED 4.2 2.3 7.5 14.9 Table 3: Validation results.",
"level of uncertainty and noise in the training data, i.e., uninformative or even misleading triples in OKGs (Gashteovski et al., 2019).",
"(ii) Our evaluation data is mostly from the more challenging long tail.",
"(iii) OKGs might be fragmented, thus inhibiting information flow.",
"Also, note that the removal of evaluation data from training removes evidence for the evaluated long-tail entities.",
"(iv) Naturally, in LP, we do not know all the true answers to questions.",
"Thus, the filtered rank might still contain many true predictions.",
"In OLP, we expect this effect to be even stronger, i.e., the filtered ranking metrics are lower than in the KG setting.",
"Still, like in KG evaluation, with a large enough test set, the metrics allow for model comparisons.",
"Model and data errors.",
"We inspected predictions for VALID-LINKED from COMPLEX-LSTM trained on THOROUGH .",
"We sampled 100 prediction errors, i.e., triples for which no correct predicted mention appeared in the filtered top-50 rank.",
"We classified prediction errors by inspecting the top-3 ranks and judged their consistency.",
"We classified triple quality judging the whole triple.",
"We counted an error as correct sense / wrong entity , when the top-ranked mentions are semantically sensible, i.e. for (Irving Azoff, was head of, ?) the correct answer would be MCA Records , but the model predicted other record companies.",
"We counted an error as wrong sense whenfor the same examplethe model mostly consistently predicted other companies or music bands, but not other record companies.",
"If the predictions are inconsistent, we counted the error as noise .",
"An additional quality assessment is the number of wrong triples caused by extraction errors in OPIEC, e.g., (Finland, is the western part of, the balkan peninsula) , (William Macaskill, is vice-president of, giving) , or errors in alternative mentions.",
"We also looked for generic mentions in the evaluation data.",
"Such mentions contain mostly conceptual knowledge like in (computer science, had backgrounds in, mathematics) .",
"Other generic triples, like (Patrick E., joined the team in, the season) , have conceptual meaning, but miss context to disambiguate the season .",
"The results in Tab.",
"4 suggest that the low performance in the experiments is not due to noisy evaluation data.",
"74% of the examined prediction errors on VALID-LINKED contained correct, nongeneric facts.",
"The shown model errors raise the question of whether there is enough evidence in the data to make better predictions.",
"We proposed the OLP task and a method to create an OLP benchmark.",
"We created the large OLP benchmark OLPBENCH , which will be made publicly available 4 .",
"We investigated the effect of leakage of evaluation facts, non-relational information, and entity-knowledge during model selection using a prototypical open link prediction model.",
"Our results indicate that most predicted true facts are genuinely new.",
"The first author would like to gratefully thank the NVIDIA Corporation for the donation of a TITAN Xp GPU that was used in this research."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other"
] |
[
"Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and assume that the gold mention information is given.",
"In this paper, we cast bridging anaphora resolution as question answering based on context.",
"This allows us to find the antecedent for a given anaphor without knowing any gold mention information (except the anaphor itself).",
"We present a question answering framework ( BARQA ) for this task, which leverages the power of transfer learning.",
"Furthermore, we propose a novel method to generate a large amount of quasi-bridging training data.",
"We show that our model pre-trained on this dataset and fine-tuned on a small amount of in-domain dataset achieves new state-of-the-art results for bridging anaphora resolution on two bridging corpora (ISNotes (Markert et al., 2012) and BASHI (R osiger, 2018)).",
"Anaphora accounts for text cohesion and is crucial for text understanding.",
"An anaphor is a noun phrase (NP) that usually refers back to the same or a different entity (the antecedent) in text.",
"Anaphora resolution is the task to determine the antecedent for a given anaphor.",
"While direct anaphora resolution attracts a lot of attention in the NLP community recently, such as Winograd Schema Challenge (Rahman and Ng, 2012; Opitz and Frank, 2018; Kocijan et al., 2019), indirect anaphora resolution or bridging anaphora resolution is less well studied.",
"In this paper, we focus on bridging anaphora resolution where bridging anaphors and their antecedents are linked via various lexico-semantic, frame or encyclopedic relations.",
"Following Hou et al. (2013b) and Rosiger et al. (2018), we mainly consider referential bridging in which bridging anaphors are truly anaphoric and bridging relations are context-dependent.",
"In Example 1 1 , both her building and buildings with substantial damage are plausible antecedent candidates for the bridging anaphor residents based on lexical semantics.",
"In order to find the antecedent ( buildings with substantial damage ), we have to take the meaning of the broader discourse context into account.",
"(1) In post-earthquake parlance, her building is a red.",
"After being inspected, buildings with substantial damage were color-coded.",
"Green allowed residents to re-enter; yellow allowed limited access ; red allowed residents one last entry to gather everything they could within 15 minutes.",
"Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Lassalle and Denis, 2011; Hou et al., 2013b; Hou, 2018a) tackle the problem using the pairwise model and assume that the gold mention information is given.",
"Most work (Poesio et al., 2004; Lassalle and Denis, 2011; Hou et al., 2013b) uses syntactic patterns to measure semantic relatedness between the head nouns of an anaphor and its antecedent.",
"Hou (2018a) proposes a simple deterministic algorithm that also considers the semantics of modifications for head nouns.",
"These approaches, however, do not take the broader context outside of noun phrases (i.e., anaphors and antecedent candidates) into account and often fail to resolve context-dependent bridging anaphors as demonstrated in Example 1.",
"Resolving bridging anaphors requires context-dependent text understanding.",
"Recently, Gardner et al. (2019) argue that question answering (QA) is a natural format to model tasks that require question understanding.",
"In this paper, we cast bridging anaphora resolution as question answering based 1 All examples, if not specified otherwise, are from ISNotes (Markert et al., 2012).",
"Bridging anaphors are typed in boldface, antecedents in italics throughout this paper.",
"on context.",
"We develop a QA system ( BARQA ) for the task based on BERT (Devlin et al., 2019).",
"Given a context as shown in Example 1, we first rephrase every anaphor as a question, such as residents of what? .",
"By answering the question, the system then identifies the span of the antecedent from the context.",
"Compared to the pairwise model, our QA system does not require the gold or system mention information as the antecedent candidates.",
"In addition, this framework allows us to integrate context outside of NPs when choosing antecedents for bridging anaphors.",
"For instance, Green and damage were color-coded are among the top predicted answers for the above question.",
"Different from coreference resolution, there are no large-scale corpora available for referential bridging resolution due to its complexity.",
"In this paper we propose a new method to generate a large amount of quasi-bridging training data from the automatically parsed Gigaword corpus (Parker et al., 2011; Napoles et al., 2012).",
"We demonstrate that our quasi-bridging training data is a better pre-training choice for bridging anaphora resolution compared to the SQuAD corpus (Rajpurkar et al., 2016).",
"Moreover, we show that our model pre-trained on this dataset and fine-tuned on a small amount of in-domain dataset achieves new state-of-the-art results for bridging anaphora resolution on two bridging corpora (i.e., ISNotes (Markert et al., 2012) and BASHI (Rosiger, 2018)).",
"To summarize, the main contributions of our work are: (1) we formalize bridging anaphora resolution as a question answering problem and propose a QA model to solve the task; (2) we explore a new method to generate a large amount of quasi-bridging training dataset and demonstrate its value for bridging anaphora resolution; and (3) we carefully carry out a series of experiments on two referential bridging corpora and provide some error analysis to verify the effectiveness of our QA model to resolve the context-dependent bridging anaphors in ISNotes.",
"We release the code and all experimental datasets at https://github.",
"com/IBM/bridging-resolution .",
"Bridging Anaphora Resolution.",
"Since the '90s, the empirical corpus studies related to bridging have been carried out on various genres and different languages (Fraurud, 1990; Poesio and Vieira, 1998; Poesio, 2004; Nissim et al., 2004; Gardent and Manuelian, 2005; Nedoluzhko et al., 2009; Eckart et al., 2012; Markert et al., 2012; Rosiger, 2018; Poesio et al., 2018).",
"Among those datasets, ISNotes (Markert et al., 2012), BASHI (Rosiger, 2018) and ARRAU (Poesio et al., 2018) are re-cent three public English corpora which contain mediumto large-sized bridging annotations and have been used to evaluate systems' performance on bridging anaphora recognition (Hou et al., 2013a; Hou, 2016; Rosiger et al., 2018), bridging anaphora resolution (Poesio et al., 2004; Lassalle and Denis, 2011; Hou et al., 2013b; Hou, 2018a), as well as full bridging resolution (Hou et al., 2014, 2018; Rosiger et al., 2018).",
"In this paper, we focus exclusively on the task of antecedent selection.",
"It is worth noting that the bridging definition in the ARRAU corpus is different from the one used in the other two datasets.",
"Rosiger et al. (2018) pointed out that ISNotes and BASHI contain referential bridging where bridging anaphors are truly anaphoric and bridging relations are context-dependent, while in ARRAU, most bridging links are purely lexical bridging pairs which are not context-dependent (e.g., Europe Spain or Tokyo Japan ).",
"In this paper, we focus on resolving referential bridging anaphors.",
"Regarding the algorithm for bridging anaphora resolution, most previous work uses the pairwise model for the task.",
"The model assumes gold or system mention information (NPs) is given beforehand.",
"It creates (positive/negative) training instances by pairing every anaphor a with its preceding mention m .",
"Usually, m is from a set of antecedent candidates which is formed using a fixed window size.",
"Poesio et al. (2004) and Lassalle and Denis (2011) trained such pairwise models to resolve mereologi-cal bridging anaphors in the English GNOME corpus 2 and the French DEDE corpus (Gardent and Manu elian, 2005), respectively.",
"One exception is Hou et al. (2013b), which proposed a joint inference framework to resolve bridging anaphors in ISNotes.",
"The framework is built upon the pairwise model and predicts all semantically related bridging anaphors in one document together.",
"Recently, Hou (2018a) generated a word representation resource for bridging (i.e., embeddings bridging ) and proposed a simple deterministic algorithm to find antecedents for bridging anaphors in ISNotes and BASHI.",
"The word representation resource is learned from a large corpus 2 The GNOME corpus is not publicly available.",
"and it captures the common-sense knowledge (i.e., semantic relatedness) between NPs.",
"Different from the algorithms mentioned above, our QA model does not require the extracted or gold mentions (NPs) as the input, and it predicts the span of the antecedent for a bridging anaphor directly.",
"Question Answering.",
"Reading comprehension or question answering based on context has at-tacted much attention within the NLP community, in particular since Rajpurkar et al. (2016) released a large-scale dataset (SQuAD) consisting of 100,000+ questions on a set of paragraphs extracted from Wikipedia articles.",
"Previous work has cast a few traditional NLP tasks as question answering, such as textual entailment (McCann et al., 2018), entityrelation extraction (Li et al., 2019), and coreference resolution (Wu et al., 2020).",
"However, unlike these tasks, we do not have large scale training datasets for bridging.",
"As a result, we form the questions for our task in a more natural way in order to leverage the existing QA datasets (e.g., SQuAD) that require common-sense reasoning.",
"In addition, we generate a large-scale training dataset of quasi-bridging and demonstrate that it is a good pre-training corpus for bridging anaphora resolution.",
"Recently, Gardner et al. (2019) argue that we should consider question answering as a format instead of a task in itself.",
"From this perspective, our work can be seen as a specific probing task to test a QA model's ability to understand bridging anaphora based on context.",
"Winograd Schema Challenge.",
"Bridging anaphora resolution shares some similarities with Winograd Schema Challenge (WSC).",
"Specifically, in both tasks, one has to understand the context to find the antecedents for anaphors.",
"However, the antecedent search space in bridging anaphora resolution is much bigger than the one in WSC.",
"This is because an anaphor (pronoun) and its antecedent in WSC are usually from the same sentence, while bridging pairs usually require cross-sentence inference.",
"For instance, in ISNotes, only around 26% of anaphors have antecedents occurring in the same sentence, and 23% of anaphors have antecedents that are more than two sentences away.",
"Recently, Kocijan et al. (2019) use some heuristics to generate a large-scale WSC-like dataset and report that the model pre-trained on this dataset achieves the best results on several WSC datasets after being fine-tuned on a small in-domain dataset.",
"We find similar patterns of results for bridging anaphora resolution (see Section 5.3).",
"In this section, we describe our QA system (called BARQA ) for bridging anaphora resolution in detail.",
"Figure 1 illustrates how BARQA predicts antecedents for bridging anaphors in Example 1.",
"We formulate bridging anaphora resolution as context-based QA problem.",
"More specifically, given a bridging anaphor a and its surrounding context c a , we rephrase a as a question q a .",
"The goal is to predict a text span s a from c a that is the antecedent of a .",
"We propose to use the span-based QA framework to extract s a .",
"In general, our BARQA system is built on top of the vanilla BERT QA framework (Devlin et al., 2019).",
"We further modify the inference algorithm to guarantee that the answer span s a should always appear before the bridging anaphor a (see Section 3.4 for more details).",
"Following Devlin et al. (2019), we present the input question q a and the context c a as a single packed sequence [ cls ] q a [ sep ] c a and calculate the probabilities of every word in c a being the start and end of the answer span.",
"The training objective is the log-likelihood of the correct start and end positions.",
"In English, the preposition of in the syntactic structure np 1 of np 2 encodes different associative relations between noun phrases that cover a variety of bridging relations.",
"For instance, the chairman of IBM indicates a professional function in an organization , and the price of the stock indicates an attribute of an object .",
"Poesio et al. (2004) also used such patterns to estimate the part-of bridging relations.",
"These patterns reflect how we explain bridging anaphora as human beings.",
"It seems that the most natural way to understand the meaning of a bridging anaphor a is to find the answer for the question a of what? from the surrounding context of a .",
"As a result, in order to generate the corresponding question q a for a bridging anaphor a , we first create a (cid:48) by removing all words appearing after the head of a , we then concatenate a (cid:48) with of what? to form the question.",
"This is because, as pointed by Hou (2018a), premodifiers of bridging anaphors are essential elements to understand bridging relations.",
"For instance, for the bridging anaphor a painstakingly documented report, based on hundreds of interviews with randomly selected refugees , the corresponding question is a painstakingly documented report of what? .",
"For each bridging anaphor a together with its corresponding question q a and context c a described above, we construct a list of answers A that contains all antecedents of a occurring in the context",
"c a .",
"3 In addition, for every NP antecedent n from A , we add the following variations which represent the main semantics of n into the answer list: the head of n (e.g., last week's earthquake ) n (cid:48) which is created by removing all postmodifiers from n (e.g., the preliminary conclusion from a survey of 200 downtown high-rises ) n (cid:48)(cid:48) which is created by removing all postmodifiers and the determiner from n (e.g., the total potential claims from the disaster ) It is worth noting that if the context c a does not contain any antecedent for the bridging anaphor a (e.g., some anaphors do not have antecedents occurring in c a if we use a small window size to construct it), we put no answer into the answer list A .",
"Different from the SQuAD-style question answering where there is no specific requirement for the position of the predicted span, in bridging anaphora resolution, an anaphor must appear after its antecedent.",
"Therefore in the inference stage, for each bridging anaphor a , we first identify the position of a in its context c a , then we only predict text spans which appear before a .",
"We further prune the list of predicted text spans by only keeping the top k span candidates that contain at most l words ( k and l are empirically set to 20 and 5, respectively).",
"We also prune span predictions that are function words (e.g., a, an, the, this, that ).",
"During the training process, we first use Span-BERT (Joshi et al., 2019) to initialize our BARQA model because it shows promising improvements on SQuAD 1.1 compared to the vanilla BERT embeddings.",
"We then continue to train our model using different pre-training and fine-tuning strategies.",
"Section 5.3 describes different training strategies in detail.",
"For every training strategy, we train BARQA for five epochs with a learning rate of 3e-5 and a batch size of 24.",
"4 During training and testing, the maximum text length is set to 128 tokens.",
"3 In ISNotes and BASHI, we use gold coreference annotations from OntoNotes (Weischedel et al., 2011) to identify all possible antecedents for every bridging anaphor.",
"4 In general, the small learning rate (i.e., 3e-5, 4e-5, and 5e-5) and small fine-tuning epochs are common practices for fine-tuning BERT models.",
"We test the combination of these Input Text In a search for new evidence of obstruction of justice by the president, Republicans seek documents concerning several figures from the campaign fund-raising scandal.",
"Bridging anaphora is a complex phenomenon, and there are no large-scale corpora available for referential bridging.",
"In this section, we describe how we generate a large scale quasi-bridging dataset.",
"Hou (2018b) explores the syntactic prepositional and possessive structures of NPs to train word embeddings for bridging.",
"Inspired by this work, we first use these structures to identify bridging anaphors and the corresponding antecedents.",
"Next, we map them back to the discourse to create bridging-like examples.",
"More specifically, given a text, we first extract NPs containing the prepositional structure (e.g., X preposition Y ) or the possessive structure (e.g., Y 's X ).",
"In order to have a high-quality set of automatically generated bridging annotations, we apply an additional constraint to the above NPs, i.e., X and Y should not contain any other NP nodes in the constituent tree.",
"For instance, we do not consider NPs such as the political value of imposing sanctions against South Africa or the cost of repairing the region's transportation system .",
"Figure 2 illustrates how we generate a bridging annotation with a sentence pair { s y , s x } from a raw text 5 : we first extract the NP obstruction of justice from the sentence s i and identify X / Y in this extracted NP (i.e., X = obstruction, Y = justice).",
"Next, we collect a list of sentences S from the parameters for various training configurations on a small set (10 documents) of the ISNotes corpus and the BASHI corpus, respectively.",
"On both corpora, we observed that a learning rate of 3e-5, 4e-5, or 5e-5 has minimal impact on results; and for each learning rate, the result continues improving at the beginning (epochs = 1,2,3,4,5), but the performances stays more or less the same after epochs > 5.",
"5 The raw text is from the Gigaword corpus (Parker et al., 2011; Napoles et al., 2012).",
"whole text.",
"Every sentence in S contains Y but does not contain X .",
"If S contains more than one sentence, we choose the one which is the closest to s i as s y .",
"This is because close sentences are more likely semantically related.",
"Finally, we generate the sentence s x by replacing obstruction of justice in the original sentence s i with the obstruction .",
"This gives us a quasi-bridging example with two adjacent sentences (i.e., s y and s x ) and a bridging link (i.e., justice the obstruction ).",
"As a result, we obtain a large amount of quasi-bridging training data (i.e., around 2.8 million bridging pairs) by applying the method described above to the NYT19 section of the automatically parsed Gigaword corpus.",
"In order to understand the quality of our quasi-bridging training dataset, we randomly sample 100 quasi-bridging sentence pairs and manually check bridging annotations in these instances.",
"We score each bridging annotation using a scale of 0-2: 2 means that the bridging annotation is correct and the sentence pair sounds natural; 1 indicates that the example makes sense, but it does not sound natural in English; and 0 denotes that the annotation is unacceptable.",
"Overall, we find that 25% of instances and 37% of instances have a score of 2 and 1, respectively.",
"And the remaining 38% of instances are scored as zero.",
"In general, our noisy quasi-bridging training dataset does contain a large number of diverse bridging pairs.",
"We use four datasets for experiments.",
"The first dataset is ISNotes 6 released by Markert et al. 6 http://www.h-its.org/en/research/nlp/ isnotes-corpus (2012).",
"This dataset contains 50 texts with 663 referential bridging NPs from the World Street Journal (WSJ) portion of the OntoNotes corpus (Weischedel et al., 2011).",
"The second dataset is called BASHI from Rosiger (2018).",
"It contains 459 bridging NPs 7 with 344 referential anaphors from 50 WSJ texts 8 .",
"Note that bridging anaphors in these two corpora are not limited to definite NPs as in previous work (Poesio et al., 1997, 2004; Lassalle and Denis, 2011) and bridging relations are not limited to the prototypical whole part relation or set element relation.",
"We consider these two corpora as expert-annotated in-domain datasets.",
"We assume that some reasoning skills (e.g., world knowledge, word relatedness) required to answer questions in SQuAD can also be applied for bridging anaphora resolution.",
"Therefore we include the SQuAD 1.1 training data (Rajpurkar et al., 2016) as one training dataset.",
"Another training dataset is the large scale quasi-bridging corpus ( QuasiBridging ) described in Section 4.",
"Table 1 summarizes the four datasets mentioned above.",
"Note that in ISNotes and BASHI, the number of QA pairs is more than the number of bridging anaphors.",
"This is because an anaphor can have multiple antecedents (e.g., coreferent mentions of the same antecedent entity).",
"Following Hou (2018a), we use accuracy on the number of bridging anaphors to measure systems' performance for resolving bridging anaphors on ISNotes and BASHI.",
"It is calculated as the number of the correctly resolved bridging anaphors divided by the total number of bridging anaphors.",
"We measure two types of accuracy : lenient accuracy and strict accuracy .",
"In strict accuracy , only the original gold antecedent annotations are counted as the correct answers.",
"For lenient accuracy , we add the additional variations of the original antecedent annotations (described in Section 3.3) into the correct answer list.",
"For instance, suppose that the gold antecedent annotation is the Four Seasons restaurant , and the predicted span is Four Seasons restaurant , we count this prediction as an incorrect prediction in strict accuracy evaluation.",
"However, it is a correct prediction in lenient accuracy evaluation.",
"7 BASHI considers comparative anaphora as bridging anaphora.",
"We exclude them from this study.",
"8 Note that these WSJ articles are different from the ones in ISNotes.",
"It is worth noting that our lenient accuracy corresponds to the exact match metric in SQuAD (Rajpurkar et al., 2016).",
"The correct answer lists that are generated as described in Section 3.3 can partially address the evaluation problem of imperfect system mention predictions.",
"We do not report F1 score because it will give partial credit for a prediction that does not capture the main semantics of the original gold annotation, such as the Four Seasons .",
"During evaluation, for every bridging anaphor a , let s a be the sentence containing a , we use the first sentence of the text, the previous two sentences of s a , as well as s a to form a 's surrounding context c a .",
"This is in line with Hou (2018a)'s antecedent candidate selection strategy.",
"In this section, we carry out experiments using our BARQA system with different training strategies.",
"For every bridging anaphor a , we choose the span with the highest confidence score from its context c a as the answer for the question q a and use this span as the predicted antecedent.",
"We report results on ISNotes and BASHI using lenient accuracy (see Table 2).",
"Looking at the results on ISNotes, we find that BARQA trained on a small number of in-domain dataset ( BASHI ) achieves an accuracy of 38.16% on ISNotes, which is better than the model trained on the other two large-scale datasets ( SQuAD 1.1 and QuasiBridging ).",
"However, when using these two datasets to pre-train the model then fine-tuning it with the small in-domain dataset ( BASHI ), both settings (i.e., SQuAD 1.1 + BASHI and QuasiBridging + BASHI ) achieve better results compared to using BASHI as the only training dataset.",
"This verifies the value of the pre-training + fine-tuning strategy, i.e., pre-training the model with large scale out-of-domain or noisy dataset, then fine-tuning it with a small in-domain dataset.",
"Particularly, we notice that the performance of using QuasiBridging alone is worse than the one using SQuAD 1.1 only.",
"However, combining QuasiBridging and BASHI achieves the best result on ISNotes, with an accuracy of 47.21%.",
"It seems that the large-scale in-domain noisy training data ( QuasiBridging ) brings more value than the large-scale out-of-domain training data ( SQuAD 1.1 ).",
"We observe similar patterns on the results on Corpus Genre Bridging Type # of Anaphors # QA Pairs ISNotes WSJ news articles referential bridging 663 1,115 BASHI WSJ news articles referential bridging 344 486 SQuAD 1.1 (train) Wikipedia paragraphs -87,599 QuasiBridging NYT news articles quasi bridging 2,870,274 2,870,274 Table 1: Four datasets used for experiments.",
"BASHI.",
"Pre-training the model on QuasiBridging then fine-tuning it on ISNotes achieves the best result with an accuracy of 37.79%.",
"Furthermore, when evaluating on BASHI, it seems that using SQuAD 1.1 as the pre-training dataset does not bring additional values when combining it with ISNotes .",
"Previous work for bridging anaphora resolution on ISNotes and BASHI use gold/system mentions as antecedent candidates and report results using strict accuracy (Hou et al., 2013b; Hou, 2018a).",
"In order to fairly compare against these systems, for every bridging anaphor a , we first map all top 20 span predictions of our system BARQA to the gold/system mentions, then we choose the gold/system mention with the highest confidence score as the predicted antecedent.",
"Specifically, we map a predicted span s to a mention m if they share the same head and s is part of m (cid:48) ( m (cid:48) is created by removing all postmodifiers from m ).",
"For instance, total potential claims is mapped to the mention the total potential claims from the disaster .",
"If a predicted span can not be mapped to any gold/system mentions, we filter it out.",
"Following Hou (2018a), we only keep the predictions whose semantic types are time if a is a time expression.",
"The above process is equal to using gold/system mentions and their semantic information to further prune BARQA 's span predictions.",
"Table 3 and Table 4 compare the results of our system BARQA against previous studies for bridging anaphora resolution on ISNotes and BASHI, respectively.",
"For both datasets, the BARQA model is trained using the best strategy reported in Table 2 (pre-training with QuasiBridging + fine-tuning with small in-domain data).",
"On ISNotes, previously Hou (2018a) reported the best result by adding the prediction from a deterministic algorithm ( embeddings bridging (NP head + modifiers) ) as an additional feature into the global inference model ( MLN II ) proposed by Hou et al. (2013b).",
"The deterministic algorithm is based on word embeddings for bridging and models the meaning of an NP based on its head noun and modifications.",
"Our system BARQA , when using the gold mentions together with their semantic information to further prune the span predictions, achieves the new state-of-the-art result on ISNotes, with a strict accuracy of 50.08% (see BARQA with gold men-tions/semantics, strict accuracy in Table 3).",
"How-System Use Gold Mentions Accuracy Models from Hou et al. (2013b) pairwise model III yes 36.35 MLN model II yes 41.32 Models from Hou (2018a) embeddings bridging (NP head + modifiers) yes 39.52 MLN model II + embeddings bridging (NP head + modifiers) yes 46.46 This work BARQA with gold mentions/semantics, strict accuracy yes 50.08 BARQA without mention information, strict accuracy no 36.05 BARQA without mention information, lenient accuracy no 47.21 Table 3: Results of different systems for bridging anaphora resolution in ISNotes.",
"ever, we argue that using gold mention information to construct the set of antecedent candidates is a controlled experiment condition, and our experiment setup BARQA without mention information, lenient accuracy is a more realistic scenario in practice.",
"On BASHI, Hou (2018a) reported an accuracy of 29.94% ( strict accuracy ) using automatically extracted mentions from the gold syntactic tree annotations.",
"Our system BARQA without any men-tion/semantic information achieves an accuracy of 32.27% using the same strict accuracy evaluation.",
"The result of BARQA is further improved with an accuracy of 38.66% when we integrate mention/semantic information into the model.",
"Note that Hou (2018a) also adapted their deterministic algorithm to resolve lexical bridging anaphors on ARRAU (Poesio et al., 2018) and reported an accuracy of 32.39% on the RST Test dataset.",
"Although in this paper we do not focus on lexical bridging, our model BARQA can also be applied to resolve lexical bridging anaphors.",
"We found that BARQA trained on the RST Train dataset alone with around 2,000 QA pairs achieves an accuracy of 34.59% on the RST Test dataset.",
"In order to better understand our model, we automatically label bridging anaphors in ISNotes as either referential bridging/world-knowledge or referential bridging/context-dependent .",
"We then analyze the performance of BARQA and the best model from Hou (2018a) on these two categories.",
"Rosiger et al. (2018) pointed out that although lexical and referential bridging are two different concepts, sometimes they can co-occur within the same pair of expressions.",
"In Example 2, Employees is an anaphoric expression.",
"At the same time, the relation between the antecedent entity { Mobil Corp./the company's } and the bridging anaphor Employees corresponds to the common-sense world knowledge which is true without any specific context.",
"We call such cases as referential bridging/world-knowledge .",
"Differently, we call a bridging anaphor as referential bridging/context-dependent if it has multiple equally plausible antecedent candidates according to the common-sense world knowledge about the NP pairs and we have to analyze the context to choose the antecedent (see Example 1).",
"One may # pairs BARQA MLN II + emb Know.",
"argue that { the exploration and production divi-sion Employees } in Example 2 is also a valid common-sense knowledge fact, however, we consider that it is less prominent than { the company's Employees } .",
"(2) Mobil Corp. is preparing to slash the size of its workforce in the U.S., possibly as soon as next month, say individuals familiar with the company's strategy.",
"The size of the cuts isn't known, but they'll be centered in the exploration and production division, which is responsible for locating oil reserves, drilling wells and pumping crude oil and natural gas.",
"Employees haven't yet been notified.",
"For a bridging anaphor a , the deterministic algorithm ( embeddings bridging ) from Hou (2018a) uses a word representation resource learned from a large corpus to predict the most semantically related NP among all NP candidates as the antecedent.",
"The predictions from this system reflect the common-sense world knowledge about the NP pairs.",
"We thus use this algorithm to label bridging anaphors in ISNotes: if a bridging anaphor is correctly resolved by embeddings bridging , we label it as referential bridging/world-knowledge , otherwise the label is referential bridging/context-dependent .",
"Table 5 compares the percentage of correctly resolved anaphors between BARQA with gold mentions and the best model from Hou (2018a) ( MLNII + emb ) on the two bridging categories.",
"Note that MLN II + emb contains several context-level features (e.g., document span, verb pattern).",
"Overall, it seems that our BARQA model is better at resolving context-dependent bridging anaphors.",
"In this paper, we model bridging anaphora resolution as a question answering problem and propose a QA system ( BARQA ) to solve the task.",
"We also propose a new method to automatically generate a large scale of quasi-bridging training data.",
"We show that our QA system, when trained on this quasi-bridging training dataset and fine-tuned on a small amount of in-domain dataset, achieves the new state-of-the-art results on two bridging corpora.",
"Compared to previous systems, our model is simple and more realistic in practice: it does not require any gold annotations to construct the list of antecedent candidates.",
"Moreover, under the proposed QA formulation, our model can be easily strengthened by adding other span-based text understanding QA corpora as pre-training datasets.",
"Finally, we will release our experimental QA datasets (in the SQuAD json format) for bridging anaphora resolution on ISNotes and BASHI.",
"They can be used to test a QA model's ability to understand a text in terms of bridging inference.",
"The author appreciates the valuable feedback from the anonymous reviewers."
] | [
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other"
] |
[
"A false contract is more likely to be rejected than a contract is, yet a false key is less likely than a key to open doors.",
"While correctly interpreting and assessing the effects of such adjective-noun pairs (e.g., false key ) on the plausibility of given events (e.g., opening doors ) underpins many natural language understanding tasks, doing so often requires a significant degree of world knowledge and common-sense reasoning.",
"We introduce ADEPT a large-scale semantic plausibility task consisting of over 16 thousand sentences that are paired with slightly modified versions obtained by adding an adjective to a noun.",
"Overall, we find that while the task appears easier for human judges (85% accuracy), it proves more difficult for transformer-based models like RoBERTa (71% accuracy).",
"Our experiments also show that neither the adjective itself nor its taxonomic class suffice in determining the correct plausibility judgement, emphasizing the importance of endowing automatic natural language understanding systems with more context sensitivity and commonsense reasoning.",
"Discerning the varying effects of adjectival modifiers on the reading of a sentence is critical in a variety of tasks involving natural language understanding.",
"Consider the following examples: (1)",
"a. A [dead] monkey turns on a light switch.",
"b. A [dead] leg has one foot.",
"c. A [dead] leaf falls from a tree in autumn.",
"The reading of these sentences with and without the modifier dead is notably different.",
"The plausibility judgement of the event where a monkey turns on a light switch decreases when the adjectival modifier dead is added, while in the 1b or 1c examples, adding the same modifier leads to no change or an increase in event plausibility, respectively.",
"This observation has important ramifications for many NLP applications like information extraction (IE) and recognizing textual entailment (RTE), where solutions have often relied on normative rules that group the effects of adjectives according to either the adjective or its taxonomic class (Mc-Nally and Boleda, 2004; Amoia and Gardent, 2007; McCrae et al., 2014).",
"These taxonomies distinguish adjectives like false, dead, alleged (non-subsective) from others like red, large, or valid (subsective).",
"Specifically, while the 1a example may influ-ence systems to adopt the rule that adding a non-subsective adjective like dead to a noun leads to a decrease in plausibility, the other examples suggest a conflicting rule.",
"Distinguishing the effects of different adjectives (beyond just their denotation) may thus require common-sense and world knowledge.",
"Powerful, massively pre-trained language models (LMs) have pushed the performance on various natural language understanding benchmarks to impressive figures; transformer architectures including BERT and RoBERTa are believed to perform at near human-level performance on a number of Natural Language Inference (NLI) tasks (Liu et al., 2019), while the recently proposed DeBERTa, which builds upon the former two architectures, performs at state-of-the-art on MNLI, RTE, QNLI and WNLI (He et al., 2020).",
"It is however unclear whether the complex effects of the classes of modifiers exampled above are captured by the competing models given their sparsity in both the corpora and existing NLI benchmarks.",
"To examine the ability of LMs to capture and distinguish the effects of adjectives on events plausibility, we present a challenge task formulated as a plausibility classification problem consisting of sentence pairs with and without inserting possible adjectives.",
"We do so to understand the strengths and weaknesses of LMs that have led to state-of-the-art performance in downstream NLI-tasks.",
"TaA DEPT instance Inserted Modifier (Taxonomic Class) Plausibility Change (1a): A [false] key opens doors.",
"We introduce a novel plausibility task: Using automated mechanisms to extract, filter and construct natural sentences, we create ADEPT a large human-labeled semantic plausibility task consisting of 16 thousand pairs of sentences that differ only by one adjective added to a noun, and designed to resist the statistical correlations that might underpin modern distributional lexical semantics.",
"1 We show that transformer-based models are not yet adept at ADEPT : Our findings suggest performance gaps between humans and large language representation models on ADEPT , which appears to be in large part due to the models' insensitivity to context, indicating an important area for their improvement.",
"on event plausibility is context dependent: We quantify the degree to which plausibility judgements vary for the same adjective and taxonomic class, finding that rules based only on the adjective or its denotation are insufficient when assessing the plausibility readings of events.",
"For example, in our task, the non-subsective adjective like dead led to a decrease in events plausibility as frequently as it led to no change at all.",
"Building on prior work showing that normative rules are often broken for subsective adjectives (Pavlick and Callison-Burch, 2016), we inves-1 The corpus and the code to reproduce all of our experimental results are available at https://github.com/aemami1/ADEPT.",
"tigate possible effects across all types of adjectives, beyond just the taxonomical categories.",
"The scope of our analysis also goes beyond entailment effects, examining the effects on plausibility , which can be seen as both complimentary and even an extension to entailment tasks.",
"Taxonomy of adjectives: The taxonomic classification of adjectives into subsective and non-subsective categories originates from the works of Parsons (1970), Montague (1970), Clark (1970) & Kamp and Keenan (1975).",
"Canonically, subsective adjectives modify a noun such that the extension of the adjective-noun pair is a subset of the extension of the noun alone (e.g., a blue fish is still a fish and a loose tooth is still a tooth).",
"In contrast, non-subsective adjectives modify a noun such that the extension of the adjective-noun pair is not a subset of the noun's extension (e.g., a former president is not a president or an alleged criminal is not necessarily a criminal).",
"Kamp and Partee (1995) further divided non-subsective adjectives in two categories: privative and plain .",
"While when combined with nouns privative adjectives produce a disjoint set of entities from the original noun (e.g., former president does not fall under the class of presidents, making former a privative adjective), plain non-subsective adjectives do not guarantee this mutual exclusiveness (e.g., an alleged criminal may or may not be a criminal ).",
"This classification scheme has been adopted for many NLP applications including IE and RTE (Amoia and Gardent, 2006, 2007; McCrae et al., 2014).",
"For RTE, inference rules were developed according to whether the adjective was non-subsective or not.",
"For IE, non-subsective adjectives were treated as special cases for extracting open IE relations (Angeli et al., 2015).",
"We show that there is also a relation between an adjective and the plausibility of the rest of the clause, even for subsective adjectives.",
"This has direct implications for the extraction of generalizable abstract knowledge that can be extracted from a corpus.",
"Aspects of this classification scheme have since been challenged, resulting in efforts to either expand on its definitions or abandon the taxonomy altogether.",
"Del Pinal (2015) suggests that the meaning of certain nouns are only partially modified by non-subsective adjectives (e.g., only the functional features are modified), while Nayak et al. (2014) tackle the categorization problem with a statistical approach focused on the proportion of properties shared by the noun and the adjective noun pair.",
"Even more recently, inference rules relying on the original taxonomy were observed not to be without exceptions; Pavlick and Callison-Burch (2016) used human annotators to highlight cases where the deletion of non-subsective adjectives from a sentence does not necessarily result in non-entailment.",
"These on-going examinations and revisions underpin a profound linguistic phenomenon of mutual dependence: while adjectives play a crucial role in the correct interpretation of a sentence context, the context words are just as instrumental in determining the effect of an adjective; resulting in a number of exceptions to taxonomically-based rules.",
"Inspired by this, our work explores the broader question of how dependent the effect of any adjective (beyond their taxonomical class) is on the interpretation of a sentence.",
"For this, we frame our exploration in terms of changes in the plausibility of events, which we believe it can be seen as an extension to entailment.",
"Recognizing Textual Entailment & Semantic Plausibility: The RTE Challenges were yearly sources of textual inference examples (Dagan et al., 2006) consisting of a three-way classification task with the inputs as sentence pairs { T , H } with labels for entailment , contradiction or unknown (meaning T neither contradicts nor entails H ).",
"Variations of this task are also described in SNLI (Bow-man et al., 2015) and MNLI (Williams et al., 2018).",
"The Johns Hopkins Ordinal Commonsense Inference (JOCI) task generalizes RTE to the problem of determining relative change in semantic plausibility on an ordinal 5-level Likert scale (from impossible to very likely) (Zhang et al., 2017).",
"Other semantic plausibility datasets have collected judgments for the plausibility of single events (Wang et al., 2018b) and the plausibility of adjectives modifying a meronym (Mullenbach et al., 2019).",
"Such plausibility tasks have often been solved using either data-driven methods (Huang and Luo, 2017; Sasaki et al., 2017) or pre-trained LMs (Radford et al., 2019).",
"Prior work has also collected human assessments of the plausibility of adjective-noun pairs (Lapata et al., 1999; Keller and Lapata, 2003; Zhang et al., 2019); however, this line of work specifically focuses on the plausibility of bi-grams without context, known as selectional preference.",
"We develop ADEPT , a semantic plausibility task that features over 16 thousand instances consisting of two sentences, where the second sentence differs from the first only by the inclusion of an adjectival modifier.",
"Examples of these instances are in Table 1, where the inserted modifier is bracketed.",
"Formally, given the original sentence s and the modified sentence s (cid:48) , s (cid:48) is identical to s except for the addition of an adjective a before the root noun of the original sentence.",
"The task is to assess the plausibility difference in the reading of s (cid:48) versus that of s .",
"The possible plausibility ratings are:",
"1. Impossible s (cid:48) is improbable or illogical.",
"2. Less likely s (cid:48) is less likely than s .",
"3. Equally likely s (cid:48) is as plausible as s is.",
"4. More likely s (cid:48) is more likely than s .",
"5. Necessarily true s (cid:48) is true by necessity, including repetitive use of phrases or words that have similar meanings.",
"To construct ADEPT , we scrape text samples from English Wikipedia and Common Crawl, extracting adjectival modifier-noun pairs that occur with high frequency.",
"We then curated these pairs through a multi-stage pipeline to filter out extraction errors, typos, and inappropriate words, as well as oversample non-subsective adjectives which tend to be in the long-tail of a given corpora.",
"We then use existing knowledge bases to find relevant predicates for the noun in the adjective-noun pair and compose natural sentences based on them.",
"To an-Noun-amodExtraction: Clean up raw text, and use syntactic parsing to extract noun-adjectivalmodifierpairs.",
"notate the data, we provide human annotators with labelling instructions, while implementing quality control measures as exclusion criteria for final dataset instances.",
"We now detail the steps of our data collection process (see Figure 1 for an overview).",
"Tables 2 and 3 provide examples of how each step contributes to the creation of an ADEPT instance.",
"Noun-amod extraction: In order to extract adjectival modifier and noun pairs, we use two dependency-parsed corpora: English Wikipedia, which we parse using the Stanza pipeline (Qi et al., 2020), and a subset of DepCC (Panchenko et al., 2018), an automatic parse of the Common Crawl corpus.",
"After a preliminary examination of the modifier-noun pairs' quality, we kept only those pairs that occur at least 10 times in their respective corpus.",
"This filtered out many pairs that appeared anomalous or atypical (e.g., unwieldy potato ).",
"We extracted 10 million pairs from English Wikipedia and 70 million pairs from Common Crawl.",
"Noun filtering: Using these pairs, we created dictionary items consisting of nounsthat co-occur with at least four different adjectival modifiersalong with their adjectival modifiers.",
"This threshhold (as opposed to a higher one) allows us to both still find rare non-subsective adjectives to oversample at later steps, and avoid excessively reducing the number of extracted pairs.",
"2 We then filter out adjectives and nouns that e.g., are explicit, have offensive connotations using preset lists and automatic moderation tools (e.g., profanity-filter (Roman Inflianskas, 2020)).",
"Finally, we ensured that both the nouns and adjectives are valid En-2 Preliminary analyses showed that non-subsective adjectives represent less than 5% of our entries.",
"glish words.",
"This yielded slightly over 50 thousand noun-adjective dictionary items.",
"Predicate extraction: For the noun in each dictionary item, we use ConceptNet 5 (Liu and Singh, 2004) to find predicates under the relationships of IsCapableOf , HasProperty , ReceivesAction , HasA , and UsedFor .",
"We restricted the predicates to these as they best characterize the functional features of a noun, which earlier studies found to be most sensitive to change according to the attaching modifier (Del Pinal, 2015).",
"We also store the surface textthe sentence the ConceptNet annotator wrote to examplify a use of the predicate with the noun (e.g., for the noun book, under the ConceptNet relation IsCapableOf , the predicate include a table of contents is found with the surface text: A book can include a table of contents ).",
"Predicate Filtering + Scaling: After applying additional filtering to the surface texts and predicates (to remove explicit or ungrammatical pred-icates), we create triples containing a noun, a set of adjectival modifiers, and the predicates.",
"This yielded over 7,000 triples.",
"Given that an entry may contain more than one retrieved predicate for its noun, we scaled the dictionary to allow for duplicate nouns with different predicates (up to three predicates).",
"3 This yielded over 20,000 entries.",
"Sentence construction: For each of these adjective-noun-predicate entries, we generate natural sentences and four corresponding variants.",
"The original sentence ( s in Section 3) is composed only from the noun and the predicate, while the four variants ( s (cid:48) in Section 3) are modified versions of the original sentence created by adding the adjective before the root noun in the original sentence 3 Threshold selected to correspond to the average number of different predicates extracted for each noun, and avoid scale the dictionary excessively at the cost of dataset diversity.",
"(see Table 2 for examples).",
"To create the natural sentences themselves, we modify the surface text by replacing modal verbs (like can or may ) with the declarative is , as modal verbs may complicate the evaluation of what the plausibility of described events might be.",
"Adjective Sampling: To identify non-subsective adjectives in the dataset entries, we use a set of 60 non-subsective adjectives identified by Nayak et al. (2014).",
"Then, to select four adjectives we first 1) randomly select up to two non-subsective modifiers if they co-occured with the noun, and then 2) we randomly select the remaining adjectives from the list of subsective modifiers.",
"We over-sample non-subsective modifiers as they occur sparsely in the corpora and we want to evaluate their effects against other modifiers.",
"This random sampling strategy results in an about 1:4 non-subsective to subsective adjective ratio (as some entries have no non-subsective adjectives), allowing us to analyze the effect of non-subsective modifiers while maintaining an element of randomness.",
"randomly selected sentence variant (from the four variants) against its original sentence, with labels indicating the change in plausibility due to adding the selected adjective (Table 3).",
"For quality control, we also add roughly 2,000 quality-check entries including gold label instances for which there was unanimous agreement among four annotators in earlier pilots and attention-check instances that explicitly ask annotators to select a specific label.",
"We filter out all instances annotated by annotators who failed the attention checks or whose labels differed by at least two degrees from the gold labels (e.g., selected equally likely when the gold label was impossible ) on more than 10% of their annotations.",
"We also limit the maximum number of labelling tasks per annotator to 100 (corresponding to less than 0.5% of the data) to ensure that no one judge significantly affects the quality of the data.",
"Finally, we only keep those instances for which we observe a majority agreement (i.e., at least two annotators agree about the final label).",
"After this final quality-control filtering steps, the final dataset includes 16,115 instances.",
"Table 4 overviews the dataset figures, highlighting the labels' distribution and agreement.",
"By inspecting how often judges agree across our plausibility labels, we observe higher assessment variability for instances with labels further from equally likely (also the most commonly applied label).",
"This is particularly true for instances with labels at the extremes of our plausibility scale (i.e., the impossible and necessarily true labels).",
"While 40% of the dataset instances marked as equally likely have unanimous annotator agreement, this is the case for only 21% of the instances marked as impossible .",
"We found no agreement across the 5 plausibility labels ( 3) for about 15% of the annotated instances, which we do not include in the final dataset.",
"While how much judges agree on labels varies across plausibility levels, the directionality of the assigned labels is more stablei.e., many disagreements are due to judges making different but consistent assessments like more likely and necessarily true , rather than conflicting assessments like less and more likely .",
"Because of this, we also experiment with alternative 3-Class and 4-Class task formulations ( 5.3), where the impossible and necessarily true labels are either combined with other labels or are discarded.",
"Task Ambiguity Closely inspecting instances marked as impossible and necessarily true to understand possible sources of disagreement among judges, we find that only about a quarter (for impossible ) to a third (for necessarily true ) of these instances appear to be clear cases where both 1) adding the adjective led to a change in plausibility and 2) the change in plausibility made the event impossible or necessarily true .",
"Sometimes the described events are already impossible or necessarily true (e.g., average in an [average] week is made up of seven days does not change the plausibility of this statement, which was already necessarily true ).",
"In other cases, the added modifier changes the semantic interpretation of the event (e.g., the modifier algebraic makes the event an [algebraic] operator pages a doctor impossible because it alters the sense of the term operator ), or it introduces grammatical or logical errors (e.g., [former] sleeping is for maintaining sanity was likely marked as impossible for being illogical).",
"There are also clear cases of false positives, where the resulting events are not impossible or necessarily true (e.g., [romantic] Jasmine buys her dress at the store is not impossible ).",
"These issues were particularly prevalent among instances annotated as impossible , where about half of the instances appear to be false positives, ungrammatical, or nonsensical sentences.",
"We therefore also experiment with a 4-Class formulation that does not include the impossible label ( 5.3).",
"Task Reliability Given the subjective and ambiguous nature of our task, we also sought to characterize to what extent the overall reliability of our labels might be affected by it.",
"For this, two authors independently labelled 100 randomly sampled instances from ADEPT , using the same annotation specifications provided to the crowdsourcing judges.",
"We then measured the inter-assessor agreement between the two authors Cohen's Kappa = 0 .",
"82 , which indicates substantial agreement.",
"We then take the instances where both authors agreed ( 87% ) and compare their labels with those provided by the crowd-workers, obtaining a = 0 .",
"74 that while lower is still substantial.",
"Finally, the individual agreement of each of the authors' labels with crowdsourcing judges (which includes cases where authors disagree) corresponded to = 0 .",
"77 and = 0 .",
"64 , further demonstrating the overall reliability of the labels we collected.",
"We evaluate several transformer-based models on ADEPT .",
"For fine-tuning, we adopt the standard practice for sentence-pair tasks described by Devlin et al. (2015).",
"We concatenate the first and second sentence with [SEP] , prepend the sequence with [CLS] , and feed the input to the transformer model.",
"The representation for [CLS] is fed into a softmax layer for a five-way classification.",
"BERT (Devlin et al., 2015) is one of the first transformer-based architectures, featuring a pre-trained neural language model with bidirectional paths and sentence representations in consecutive hidden layers.",
"RoBERTa (Liu et al., 2019) is an improved variant of BERT that adds more training data with larger batch sizes and longer training, as well as other refinements like dynamic masking.",
"RoBERTa performs consistently better than BERT across many benchmarks (Wang et al., 2018a).",
"DeBERTa builds on RoBERTa with disentangled attention and enhanced mask decoder training with half the data used in RoBERTa; currently the best-performing transformer-based model on several NLI-related tasks (He et al., 2020).",
"Normative Rule This heuristic corresponds to the normative treatment of non-subsective modifiers according to the taxonomy described in Section 2, where the general expectation is that the insertion of a non-subsective adjective would reduce the plausibility of the modified sentence.",
"Thus, when the inserted adjective in s (cid:48) is among the list of non-subsective modifiers, this baseline predicts less likely , otherwise it predicts the majority label, which is equally likely .",
"No Context Baseline We run a word association baseline to evaluate to what extent context is needed to solve the dataset.",
"In this baseline, the transformer model is provided only the noun from s (cid:48) as the representation for the original sentence, and the modifier a as the representation for the the modified sentence s separated by [SEP] (e.g., for sentence 1a in the introduction, this corresponds to the input: monkey [SEP] dead ).",
"This is analogous to the hypothesis-only baseline in NLI (Belinkov et al., 2019), where the task does not require the full context to achieve high performance.",
"Human Evaluation To estimate human performance on our task, a new annotator (not an author) independently assessed a random sample of 100 validation instances from ADEPT .",
"The annotator then evaluated each sentence using the same instructions provided to the crowdsourcing judges, whose majority agreement determined the final label.",
"The human performance thus corresponds to the percentage of instances for which the new anno-tator's labels agree with the ADEPT labels.",
"We also estimate human performance under a no context setting, where we presented this same annotator (who was now well-acquainted with the task) with a new random sample with only the noun and the modifier.",
"The annotator then made their best guess as to what the plausibility difference was without knowing the context.",
"We ensured the new instances were distinct from those in the first random sample.",
"We primarily test the baselines and transformer models using two metrics.",
"The first metric corresponds to the prediction accuracy on the full five-label classification task (5-Class Accuracy).",
"As an alternative metricdrawing from our observations in Section 4.2we use the accuracy on a three-label classification task (3-Class Accuracy), where we bundle impossible and less likely into a single label representing a decrease in plausibility , and necessarily true and more likely into a label representing an increase in plausibility .",
"All models are implemented and trained using Hug-gingFace's Transformers library (Wolf et al., 2020).",
"We use grid-search for hyper-parameter tuning: learning rate { 1e-5, 3e-5, 5e-5 } , number of epochs { 3, 4, 5, 8 } , batch-size { 8, 16, 32 } with three different random seeds.",
"For fine-tuning, we allow for the fine-tuning of all parameters including those in the model's hidden layers.",
"3 6 Results Easy for Humans, Difficult for Transformers: Model prediction accuracy is summarized in Table 5, where the general trend is as follows: the transformer-based models have a higher prediction accuracy than the majority prediction and normative rule baselines, but still fall short of human performance by a large margin.",
"Of the transformer models, the highest 3-class accuracy is achieved by DeBERTa and the highest 5-class accuracy by RoBERTa; however, the difference in accuracy of all transformer models is small (and not statistically significant p-value > 0.05), 3 We also evaluated models where we froze the parameters of all the hidden layers as a probing mechanism, but found that no model performed better than the majority baseline.",
"In the no-context ablations where models only see the noun phrase and modifier, the transformer models performance decreases only slightly, which suggests the models might be insensitive to context.",
"In contrast, approximated human performance decreases significantly in the no-context setting, dropping e.g., from 90% to 75% accuracy for 3-class predictions.",
"This no-context human accuracy, however, is still superior to the best performing transformer model with context.",
"To understand what errors the models make, we examine the confusion matrices for the best performing models on both the 3-class (Figure 2) and 5-class formulations (Figure 3).",
"The most common errors appear to happen when a change in plausibility is erroneously classified as equally likely , and when a modifier that does not change an event's plausibility is erroneously predicted to render the new sentence as less likely .",
"Table 6 includes example sentences along with 5-Class predictions by the best performing transformer model.",
"The Taxonomic Classes Just Don't Cut it: Figure 4 shows the distribution of plausibility labels in ADEPT for both subsective and non-subsective modifiers.",
"We see that both classes of modifiers lead to a wide mix of changes in the plausibility of given events, corroborating Pavlick and Callison-Burch (2016)'s findings that normative rules cannot categorically describe a modifier's behavior.",
"This likely also explains the poor performance of the normative rule baseline on both the 5or 3-class plausibility classification task formulations.",
"ambiguous and harder to reliably assign to our dataset instances, particularly at the extremes of our plausibility scale ( 4.2).",
"Given that for the impossible label many instances did not appear to correctly capture changes in plausibility that render the modified event impossible , we conduct exploratory experiments with a 4-Class task formulation that excludes the impossible class.",
"For the best performing model (RoBERTa), we observe an overall improved accuracy from 70.8% to 81.2% (compared to the 5-Class classification task).",
"Better plausibility classification schemes and crowdsourcing protocols might help us more effectively operationalize plausibility changes.",
"However, how to effectively separate between 1) cases where the modifiers alter the semantic interpretation of a statement (and thus lead to a different event) or make the sentences ungrammatical versus 2) cases where modifiers actually lead to changes in the plausibility of the original event, remains an open question.",
"We present a new large-scale corpus and task, ADEPT , for assessing semantic plausibility.",
"Our corpus contains over 16 thousand difficult task instances, specifically constructed to test a sys-tem's ability to correctly interpret and reason about adjective-noun pairs within a given context.",
"Our experiments suggest a persistent performance gap between human annotators and large language representation models, with the later exhibiting a lower sensitivity to context.",
"Finally, our task provides deeper insight into the effects of various classes of adjectives on event plausibility, and suggests that rules based solely on the adjective or its denotation do not suffice in determining the correct plausibility readings of events.",
"In the future, we wish to investigate how ADEPT could be used to improve performance on related natural language inference tasks (e.g. MNLI, SNLI & SciTail (Khot et al., 2017)).",
"We also plan to develop new models on ADEPT and transfer them to other semantic plausibility tasks.",
"This work was supported by the Natural Sciences and Engineering Research Council of Canada and by Microsoft Research.",
"Jackie Chi Kit Cheung is supported by the Canada CIFAR AI Chair program, and is also a consulting researcher for Microsoft Research.",
"While our focus on examining what effects adjectives have on the plausibility of arbitrary events makes ascertaining the broader impact of our work challenging, this work is not void of possible adverse social impacts or unintended consequences.",
"First, to generate our dataset of events, we use English Wikipedia, Common Crawl, and Concept-Net5 (based on data from e.g., Games with a Purpose or DBPedia).",
"Such data sources are however known to exhibit a range of biases (Olteanu et al., 2019; Baeza-Yates, 2018)which LMs reproduce (Solaiman et al., 2019)being often unclear what and whose content they represent.",
"While our goal is to enable others to explore the effects of modifiers and how these effect might impact various inference tasks, users of this dataset should acknowledge possible biases and should not use it to make deployment decisions or rule out failures.",
"To this end, our dataset release will be accompanied by a datasheet (Gebru et al., 2018).",
"Depending on the context, determining changes in plausibility can also be ambiguous or even subjective (see 4.2).",
"This means that in some downstream applications, possible plausibility inference errors might, for instance, inadvertently elevate factually incorrect, subjective or misleading beliefs.",
"If those inference errors happen more when events concern certain groups or activities, they might have disparate effects across stakeholders.",
"Thus, understanding the potential impact of our plausibility inference task requires us to think about both downstream applications and possible stakehold-ers (Boyarskaya et al., 2020).",
"For instance, one application of plausibility inferences is perhaps veracity or credibility assessment.",
"It would be problematic if a system would reproduce highly harmful stereotypes by inferring that a black witness is less likely to be trustworthy than just a witness , or that an old applicant is less likely to be a productive employee than just an applicant .",
"Another application (we also used as a motivating example) is information extraction where perhaps such plausibility inferences could be used to infer which details to keep during extraction.",
"Errors might for instance harmfully reinforce the belief that the prototypical human is male (Menegatti and Rubini, 2017), if female is deemed as more likely to change the plausibility of events about e.g., doctors, scientists, or other professionals; and thus deemed a relevant (or not) detail to surface based on it."
] | [
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Recent neural text generation models have shown significant improvement in generating descriptive text from structured data such as table formats.",
"One of the remaining important challenges is generating more analytical descriptions that can be inferred from facts in a data source.",
"The use of a template-based generator and a pointer-generator is among the potential alternatives for table-to-text generators.",
"In this paper, we propose a framework consisting of a pre-trained model and a copy mechanism.",
"The pre-trained models are fine-tuned to produce fluent text that is enriched with numerical reasoning.",
"However, it still lacks fidelity to the table contents.",
"The copy mechanism is incorporated in the fine-tuning step by using general placeholders to avoid producing hallucinated phrases that are not supported by a table while preserving high fluency.",
"In summary, our contributions are (1) a new dataset for numerical table-to-text generation using pairs of a table and a paragraph of a table description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.",
"Recent data-to-text generation studies have shown significant improvement in generating faithful text aligned with data sources.",
"A copy mechanism has been widely explored to improve faithfulness in various ways.",
"Wiseman et al. (2017) used joint probabilities to let models choose between copying records from data sources or generating from a vocabulary.",
"Puduppully et al. (2019) improved a similar approach by modeling entity representations as a unit of copying.",
"This approach has proven to be effective in generating descriptive text that explicitly mentions facts from sources.",
"However, as introduced by Chen et al. (2020a), humans have the ability to produce more analyti-Model Precision Recall F1 Our full model 89.6 82.2 85.7 Lee et al. (2018) 86.2 83.7 84.9 Table 2: The overall mention detection results on the test set of OntoNotes.",
"cal text with richer inference, including numerical reasoning.",
"Making inferences beyond texts is still an open question due to the limitation of language models in handling numeric operations.",
"In this study, we further encourage research by elaborating numerical tables to initialize the ability to inject reasoning while maintaining high fluency.",
"Our contributions are summarized as follows.",
"We introduce a new dataset for table-to-text generation focusing on numerical reasoning.",
"The dataset consists of textual descriptions of numerical tables from scientific papers.",
"Our dataset is publicly available on https://github.com/titech-nlp/numeric-nlg.",
"We adopt template-guided text generation (Kale and Rastogi, 2020a) for a table-to-text generation task and propose injecting preexecuted numerical operations in the template to guide numerical-reasoning-based text generation.",
"We compare different types of templates for table representations in pre-trained models.",
"We propose a copy mechanism for pre-trained models, that uses general placeholders covering table contents and results of pre-executed numerical operations to avoid fact hallucination.",
"We conduct experiments with current state-of-the-art neural generation models and a simple template-based system to demonstrate the challenges and opportunities for future research on text generation with numerical reasoning.",
"The power of tables in presenting data efficiently further encourages research done by exploring the tables as data sources in natural language tasks, such as table-to-text generation (Liang et al., 2009; Wiseman et al., 2017; Lebret et al., 2016; Parikh et al., 2020), table question answering (Pasupat and Liang, 2015; Wang et al., 2018), and table-based fact verification (Chen et al., 2020b; Gupta et al., 2020).",
"Recent research on the table-to-text generation task is starting to generate text with more reasoning.",
"Murakami et al. (2017) explored stock prices to generate market comments by adding generalization tags of possible arithmetic operations to cover mathematical reasoning.",
"Nie et al. (2018) proposed operation-guided attentions by exploring the results of pre-executed numerical operations.",
"The dataset closest to ours is LOGICNLG, by Chen et al. (2020a), who first introduced logical text generation using open-domain tables with unknown schemas.",
"Different from our target text for generation, which consists of several sentences in a paragraph, they proposed a task of generating only one sentence from selected table contents.",
"We created numericNLG, a new table-to-text dataset focusing on a text generation task with numerical reasoning.",
"We collected table descriptions from scientific papers, that are naturally produced by experts with richer inference.",
"Data Acquisition We constructed a table-to-text dataset based on numerical tables of experimental results, extracted from PDF files of scientific papers on the ACL Anthology website, 1 introduced",
"1 https://www.aclweb.org/anthology/",
"by Suadaa et al. (2021).",
"Then, we collected candidates for corresponding descriptions from the source files using PDFMiner.",
"2 We used table numbers in their captions as keywords for the collection.",
"An example of a table and its description is shown in Figure",
"1. Data Cleansing and Annotation Extracted table descriptions can be noisy since they may contain only table numbers without any sentences describing table facts.",
"We hired experts in the computer science field to clean and annotate the extracted descriptions in the following steps: Examine tables and their corresponding descriptions and then recommend only the descriptions that have at least one sentence representing numerical facts in the table.",
"Categorize each sentence of the recommended description into three fact-checking classes: data description, supporting description, and not-related-to-table description.",
"As a final dataset, we used only sentences classified as belonging to the data description category to reduce fact hallucination.",
"Identify a content plan of table description by selecting part of table headers which directly stated or logically inferred in the description, called target header.",
"For example, refer to the table description shown in Figure 1, Our full model is selected as the target header.",
"Table 1 provides a comparison of numericNLG with other related table-to-text datasets.",
"The ROTOWIRE (Wiseman et al., 2017) dataset consists of summaries of NBA basketball games containing several paragraphs, paired with their corresponding box-score tables.",
"Since ROTOWIRE has only 39 record types, each table contains similar record types with limited schemas.",
"Although most of the ROTOWIRE table contents are in numerical values, the summaries contain only a few numerical-reasoning sentences, such as a comparison of scores between two basketball teams.",
"While our dataset consists of closed domain articles as 2 http://pypi.python.org/pypi/pdfminer/ Tables Examples Unit of Desc.",
"with ROTOWIRE, it is of shorter text (a paragraph) and with unlimited table schemas.",
"Chen et al. (2020a) introduced the LOGICNLG dataset to facilitate the study of table-to-text generation tasks with richer inference.",
"The dataset contains unlimited schemas of open-domain tables crawled from Wikipedia, paired with five annotated sentences covering different logical inferences.",
"Although most inferences are numerical reasoning, the table contents are not fully numeric.",
"Similar in motivation to LOGICNLG in generating text that can be logically entailed by facts in tables, numericNLG consists of collections of paragraphs that are naturally produced by human experts in scientific papers, paired with their corresponding numerical tables.",
"Our dataset has fewer tables than LOGICNLG, focusing on numerical-reasoning text in the scientific domain.",
"Due to ROTOWIRE's limited schemas, Wiseman et al. (2017) viewed a table input as a set of records (entity, value, type), where the entity and the type are the extracted row and column names, respectively.",
"Because of the unlimited table schemas in our dataset, by capturing the original table structure in real-world tables, this paper uses the representations which consist of captions, row headers, column headers, cell values, and metrics, called a data table.",
"Using only descriptive facts from the data table as input representations is sufficient to generate descriptive texts that explicitly mention facts in the table.",
"However, since we intend to produce more analytical text with numerical reasoning, we propose adding inferred facts to the input representation by computing a set of arithmetic operations on the data table beforehand, defined as a pre-executed operation table.",
"Data Table We view T as a set of cells with their corresponding row header ( rh ), column header ( ch ), numerical value ( val ), and metric-type ( m ), defined as a data table ( TD ).",
"A data table for the example in Figure 1 consists of rh : ((model, our full model), (model, lee et al. (2018))); ch : (); val : (( 89 . 6 , 82 . 2 , 85 . 7 ), ( 86 . 2 , 83 . 7 , 84 . 9 ); and m : (precision, recall, f1).",
"Since our tables are annotated with a targeted header as a content plan for table descriptions, we mark cells corresponding to the targeted header with a target flag ( tgt ) to highlight the marked cells in text generation.",
"We set tgt = 1 for targeted cells and tgt = 0 for non-targeted cells.",
"In this study, we preprocess the header name by concatenating the row and column headers ( h = [ rh ; ch ] ) and keep information about the header category by extracting overlapping tokens of row and column headers as th .",
"As a result, we define TD = ( h ij , th ij , val ij , m ij , tgt ij ) , where 1 i n r , 1 j n c ; n r and n c are the numbers of rows and columns, respectively.",
"Pre-executed Operation Table We provide a table of pre-executed cell operations ( TOP ) by doing mathematical operations only on targeted cells to limit the calculation.",
"In this study, we cover maximum, minimum, and difference operations.",
"Examples of a preprocessed table, data table, and pre-executed operation table are shown in Figure",
"2. Linearized Table Supporting transfer learning of pre-trained transformers to our table-to-text generation task, we prepare a linearized table PT as an input representation so that it similar to the representation that encoder has seen during pre-training.",
"T is converted to a flat string PT = w 1 , ..., w | PT | , similar to that used in many prior work (Wang et al., 2020; Chen et al., 2020a; Kale and Rastogi, 2020b), where w i denotes the i -th word in paragraph PT with length | PT | .",
"In this study, we adopt the template-based input representation, introduced by Kale and Rastogi (2020a), to handle representation bias between a structured data T and a natural language utterance PT , where PT is generated using a manually defined template.",
"We propose not only covering data table TD in the template but also injecting the pre-executed numerical operations of table T through TOP to guide numerical-reasoning-based text generation.",
"We consider four different methods 3 for converting T into sequences, the last two being our contributions.",
"3 An example is shown in Table 6 in the appendix.",
"1. Naive Representation T is simply flattened into a sequence ignoring its table structure by concatenating captions, headers, metrics, and targeted cell values: caption: <table id> <caption> .",
"row name: <rh 1 > . . . <rh nr > .",
"column name: <ch 1 > . . . <ch nc > .",
"metric: <m 1 > , ..., <m nr/nc > .",
"value: <val 1 .",
"1 > . . . <val nr.nc > .",
"2. Data-based Template ( TD temp) T is transformed into a natural language sentence by scanning each row of TD with tgt = 1 to fill a manually defined template: <table id> shows <caption> .",
"<m 1 .",
"1 > of <h 1 .",
"1 > is <val 1 .",
"1 > . . . <m nr.nc > of <h nr.nc > is <val nr.nc > .",
"This representation covers the semantics of data in the original table.",
"3. Reasoning-based Template ( TOP temp) Mathematical operation arguments and results from TOP are injected in this representation to cover the numerical reasoning of data in the original table.",
"This naive representation omits the relation between rows and columns.",
"Note that <table id> is extracted from the caption to support table mentioning in generating table descriptions.",
"We define h op and val op as a header and a value of an operation result respectively, where op = { max, min, diff } .",
"Specific to the difference operation, h diff 1 and h diff 2 refer to the first and second header arguments, respectively.",
"Then, T is represented by concatenating the templatized representation for each row of TOP : <table id> shows <caption> .",
"<h max > has the largest <m max > ( <val max > ) of <th max > .",
"<h min > has the smallest <m min > ( <val max > ) of <th min > .",
"<m diff > of <h diff 1 > is larger/smaller than <h diff 2 >.",
"4. Data and Reasoning-based Template ( TD + TOP temp) T is converted by combining templatized sentences of TD and TOP .",
"This representation covers both data and their numerical reasoning.",
"The task is to generate text by translating table representation PT into table description Y = y 1 , y 2 , ..., y n .",
"We apply a series of generation models to solve the proposed task.",
"While our focus is primarily on pre-trained models since they have been most widely used for limited data settings, <table_id> shows that <header_max> achieves higher <metric_max> and <metric_max> score.",
"like ours, we also include a template-based generator and a pointer-generator network as baselines.",
"Template-based Generator We design a domain-specific template-based generator covering two types of sentences in producing table descriptions: table referring sentences and data description sentences.",
"Since our task focuses on numerical-reasoning descriptions, we define templatized sentences using maximum records in table TOP : <table id> shows <caption> .",
"we can see that <h max > outperforms other <th max > with <val max > of <m max > .",
"Pointer-Generator Pointer-generator (See et al., 2017) is a sequence-to-sequence model with attention and a copy mechanism.",
"This model copes with the out-of-vocabulary problem in data-to-text generation by jointly copying from source texts and generating from a vocabulary.",
"Fine-tuned GPT2 GPT2 (Radford et al., 2019) is a pre-trained language model with a decoder-only transformer architecture.",
"We fine-tuned the GPT2 model by using table representation PT as a prefix of our input.",
"Specifically, we fed the concatenation of table representation PT and table description Y to the model and generated Y .",
"In the inference phase, we used only PT as the input to generate Y starting after the last token of PT .",
"Fine-tuned T5 T5 (Raffel et al., 2020) is a pre-trained transformer model with an encoder-decoder architecture, that solves natural language tasks by converting into a text-to-text format.",
"We fine-tuned the T5 model in our dataset by adding a summa-rize prefix to table representation PT producing output Y .",
"Copy Mechanism Pre-trained language models have proven their effectiveness in handling the open vocabulary problem through subword tokenization.",
"Supported by attention layers of the transformer in their architecture, the models learn to attend to source inputs while generating target texts in subword units.",
"However, pre-trained generators often produce texts that are not aligned to table sources.",
"In this study, we propose strengthening their copying ability by incorporating a copy mechanism into the pre-trained models.",
"Although a copy mechanism based on pointer-generator (See et al., 2017) was used for pre-trained models (Chen et al., 2020c) and is well-known in the community, it cannot maintain the global logical structure of sentences with richer inference.",
"We instead employed a simpler copy mechanism based on placeholders (Murakami et al., 2017) with more specific tags than in Chen et al. (2020a).",
"We further propose a ranking-based placeholder alignment algorithm, as illustrated in Figure",
"3. First, we align entities and numbers in Y with the data tables TD and pre-executed arithmetic operation results TOP by using string matching.",
"The alignment starts from the first row to the last row of TOP .",
"If no matched token is found, it continues Model BLEU ROUGE-L METEOR BERTSCOREPARENT Template-based 2.82 26.97 15.82 86.88 17.15 Pointer-generator (naive) 2.80 15.26 7.82 76.38 1.40 Fine-tuned GPT2 (naive) 3.06 23.7 18.84 85.12 6.56 Fine-tuned GPT2 ( TD temp) 3.01 22.97 *17.10 *84.68 6.53 Fine-tuned GPT2 ( TOP temp) 4.63 *25.39 18.85 *85.66 7.72 Fine-tuned GPT2 ( TD + TOP temp) 5.05 *25.13 19.14 *85.40 8.05 Fine-tuned GPT2 (naive) + Copy 1.29 *11.66 *6.94 *78.73 *2.45 Fine-tuned GPT2 ( TD temp) + Copy 1.36 *11.23 *6.43 *77.76 *2.10 Fine-tuned GPT2 ( TOP temp) + Copy 1.18 *9.40 *4.42 *73.83 *0.91 Fine-tuned GPT2 ( TD + TOP temp) + Copy 1.22 *9.62 *5.47 *70.87 *1.55 Fine-tuned T5 (naive) 4.25 29.71 18.94 87.64 13.09 Fine-tuned T5 ( TD temp) 5.02 30.25 *20.11 87.68 15.09 Fine-tuned T5 ( TOP temp) 4.99 28.63 18.85 *87.17 12.25 Fine-tuned T5 ( TD + TOP temp) 4.83 29.13 18.46 87.34 12.78 Fine-tuned T5 (naive) + Copy 5.14 *27.40 18.49 *86.37 *12.47 Fine-tuned T5 ( TD temp) + Copy 4.96 *27.08 18.23 *86.12 *11.65 Fine-tuned T5 ( TOP temp) + Copy 5.24 *28.02 18.68 *86.52 *11.96 Fine-tuned T5 ( TD + TOP temp) + Copy 5.45 *28.15 19.16 *86.54 *12.95 Table 2: Experimental results of different models with various types of table representations and proposed copy mechanism.",
"to the rows of TD .",
"We set a higher rank to TOP than TD in the alignment since we focus on logical text generation.",
"Then, we replace the matched tokens with corresponding placeholders 4 in a templatized description Y temp .",
"As depicted in Figure 3, since our full model in sentence Y is matched with the header result of the maximum operation, we replace it with <header max> placeholder.",
"During the fine-tuning phase, instead of directly generating Y , the models learn to produce a templatized description Y temp including placeholders as well as words.",
"In the inference phase, we design a ranking algorithm with a placeholder memory to select the best-replaced tokens for placeholders of a predicted templatized description Y temp in producing a generated description Y .",
"We define a set of values in the same row of source tables as a content set and prioritize replacing placeholders in one sentence with the same content set, ensuring sentence coherence.",
"A content set of TD is a tuple of header, metric, and value.",
"For TOP , a content set consists of header, metric, and value of the operation results.",
"Specific to the difference operation, we add the header of the first and second arguments to the content set since the header arguments are important to capture entity comparison in a sentence.",
"example, as shown in Figure 3, after replacing the header max placeholder with the header result from the first row of maximum records of TOP in Step 1, the related placeholders from the same content set ( metric max and value max ) are added to the placeholder memory as higher-ranked candidates in the searching space.",
"The placeholder memory is reset to empty in the following sentence of Y temp and the alignment starts again from the next content set of table sources.",
"We conducted experiments on the proposed dataset to evaluate the performance of the text generation models and verify the effectiveness of the approach of using different table representations.",
"We used BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) to evaluate the informativeness of generated texts.",
"We computed the BERTSCORE (Zhang et al., 2020) to assess the similarity between the generated texts and the ground-truth table descriptions by using contextualized token embeddings of pre-trained BERT (Devlin et al., 2019), which have been shown to be effective for paraphrase detection.",
"Considering both references and table contents, we also used the PARENT metric, proposed by Dhin-gra et al. (2019).",
"In our experiments, we modified the PARENT calculation by adding noun phrases of table captions as table contents and used only targeted table contents for table sources.",
"We trained a pointer-generator model using the Adagrad optimizer with a batch size of 8 and a learning rate of 0 .",
"15 .",
"For fine-tuning the GPT2 model, the Adam optimizer set weight decay to 3 10 5 .",
"Following Raffel et al. (2020), the T5 model was fine-tuned with a constant learning rate of 0.001.",
"We trained all models for a maximum of ten epochs with early stopping based on the loss score on the validation set (patience of 3).",
"At the time of decoding, the generated text was produced through a beam search of size",
"5. 7 Results 7.1 Automatic Evaluation Table 2 shows our experimental results.",
"The fine-tuned T5 models performed better than the others in terms of BLEU, ROUGE-L, METEOR, and BERTSCORE .",
"The slightly lower PARENT of the best fine-tuned T5 model than the template-based generator implies that the fine-tuned T5 model was also comparable in terms of generating related table descriptions.",
"The pointer-generator model had the lowest score since our dataset consists of limited table collections with a broad vocabulary and challenging target texts.",
"Effect of table representation Comparing the performance between table representation types in the pre-trained models, we can see a different tendency between GPT2 and T5.",
"The more similar the table representation used as an input, the higher the score of GPT2.",
"Since GPT2 had only a decoder, the inputs including reasoning-based templates ( TOP and TD + TOP ), which are more similar to our target with numerical reasoning, performed the best for several metrics with more than 1 point improvement.",
"In T5 with an encoder-decoder architecture, on the contrary, there was only a slight margin between different table representations.",
"This indicates that the encoder part of T5 can capture table contexts from various input templates.",
"For variants without a copy mechanism, T5 with only data representation ( TD ) outperformed the other representation types with longer sentences for all metrics.",
"Because of the gap between the encoder and decoder, T5 still had difficulty aligning the information of longer inputs and outputs.",
"Effect of copy mechanism The worst scores of the fine-tuned GPT2+copy models indicate that our proposed copy mechanism failed to learn the templatized target patterns in the fine-tuning step.",
"The decoder-only GPT2 could not handle the sparse distributions of target texts with placeholders.",
"Conversely, the copy-based fine-tuned T5 models achieved a better BLEU score due to their encoder and decoder ability in handling output texts with placeholders.",
"Table 3 shows table descriptions generated by the template-based, pointer-generator, and fine-tuned pre-trained models (GPT2 and T5), using data and reasoning-based templates 5 for our table example in Figure",
"2. We marked sentences related to table captions in green, correct facts based on table contents in blue, and incorrect facts in red.",
"In this study, since we had a limited training set with a broader vocabulary, the pointer-generator model tended to result in repetitive words and failed to generate well-described descriptions.",
"The pre-trained models, GPT2 and T5, generated more natural descriptions.",
"While several pieces of text generated by GPT2 included numerical facts, they used numbers that were not extracted from table contents.",
"The T5 models produced descriptions that were more related to table contents than GPT2.",
"Considering our lengthy output examples in Table 3, unlike the fine-tuned GPT2 model, which generated longer sentences, the fine-tuned T5 model generated shorter sentences than the references.",
"6 The length gap between the references and outputs of the fine-tuned T5 model affected the F1-based metrics of ROUGE-L, METEOR, BERTSCORE , and PARENT.",
"Note that BLEU is a precision-based metric that can handle shorter outputs through a brevity penalty (Papineni et al., 2002).",
"Therefore, we assume that BLEU better represents the performance of the fine-tuned T5 model than the other metrics.",
"We conducted a human evaluation 7 to better assess the quality of the generated text.",
"We compared our copy-based fine-tuned T5 model with 5 Examples using other table representations are shown in Table 9 in the appendix.",
"6 Average token length of references: 80.57, GPT2: 87.39, GPT2+copy: 73.58, T5: 39.81, T5+copy: 41.81.",
"the template-based, pointer-generator, fine-tuned GPT2, and fine-tuned T5 models.",
"We did not compare it against the copy-based fine-tuned GPT2 since GPT2 failed to incorporate our proposed copy mechanism.",
"We used the best table representation with majority metrics for each model on the basis of the experimental results in Table",
"2. In the first study, we evaluated the correctness of the generated text on the basis of facts in tables.",
"We randomly selected 30 tables in the test set and elicited responses from three graduate students per table.",
"Following Wiseman et al. (2017), the raters were asked to count how many facts in the descriptions were supported by numerical data in the tables and how many were contradicted.",
"Since our task covers numerical-reasoning text, we distinguished descriptive numerical facts from inferred numerical facts.",
"We also measured the level of relevance of the generated text to the table captions by using a four-point Likert scale (highly relevant, relevant, somewhat relevant, and irrelevant).",
"The results are shown in Table",
"4. The pointer-generator failed to reflect facts due to the wide variety of our table schemas.",
"While the fine-tuned GPT2 model generated sentences with a larger number of descriptive and inferred facts than the others on average, most of the facts were contradictive.",
"The fine-tuned T5 model generated fewer sentences than GPT2, with the average number of inferred facts being larger than that of descriptive facts.",
"Our model based on the fine-tuned T5 model with a copy mechanism reduced the ratio of contradictive facts for both descriptive and inferred facts.",
"Following earlier work (Puduppully et al., 2019), we also evaluated text fluency in terms of grammaticality, coherence, and conciseness by using best-worst scaling (BWS) (Louviere and Woodworth, 1991; Louviere et al., 2015).",
"We divided the outputs of the five models into ten pairs of descriptions.",
"We presented workers with two descriptions and asked them to decide which one is best for each fluency category.",
"The score of each model was calculated by using the MaxDiff approach (Orme, 2009): the number of times a description was chosen as the best minus the number of times it was chosen as the worst.",
"Scores range from 100 (absolutely worst) to 100 (absolutely best).",
"We elicited judgments with Amazon Mechanical Turk for the 30 descriptions, rated by 3 participants.",
"The results are shown in Table",
"5. Most of the pre-trained models achieved better scores than the others.",
"The fine-tuned GPT2 model achieved the highest score in terms of grammaticality and coherence.",
"The fine-tuned T5 model achieved the highest score in terms of conciseness.",
"Adding a copy mechanism to the T5 slightly decreased the grammaticality and conciseness but improved the coherence.",
"We proposed numericNLG, a new dataset for table-to-text generation using a table and its corresponding description from scientific papers, focusing on numerical-reasoning texts.",
"Even though our proposed dataset is not a large-scale table collection, we provided pairs of a table and its rich inference description, that are naturally written by experts in scientific papers, supporting further research on table-to-text generation with numerical reasoning.",
"We conducted experiments with fine-tuned pre-trained models by using several types of table linearization as input representations, comparing with a template-based generator and pointer-generator.",
"The experiments showed that transfer-learning of pre-trained language models leads to an improvement in our settings, that resulted in more fluent text while it still lacked fidelity to table contents.",
"We then proposed incorporating a copy mechanism by using general placeholders to avoid the production of hallucinated phrases, that are not supported by tables while preserving high fluency.",
"Even though our proposed copy mechanism failed to learn to generate better outputs in the decoder-only pre-trained models, we showed that a copy-based pre-trained model with an encoder-decoder architecture leads to a better BLEU score and improves correctness.",
"Lya Hulliyyatus Suadaa is supported by the Indonesian Endowment Fund for Education (LPDP) and the Okumura-Takamura-Funakoshi Laboratory, Tokyo Institute of Technology.",
"This work is partially supported by JST PRESTO (Grant Number JPMJPR1655).",
"We thank the anonymous reviewers for their helpful discussion on this work and comments on the previous draft of the paper."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"other",
"abstain",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"The importance of building semantic parsers which can be applied to new domains and generate programs unseen at training has long been acknowledged, and datasets testing out-of-domain performance are becoming increasingly available.",
"However, little or no attention has been devoted to learning algorithms or objectives which promote domain generalization, with virtually all existing approaches relying on standard supervised learning.",
"In this work, we use a meta-learning framework which targets zero-shot domain generalization for semantic parsing.",
"We apply a model-agnostic training algorithm that simulates zero-shot parsing by constructing virtual train and test sets from disjoint domains.",
"The learning objective capitalizes on the intuition that gradient steps that improve source-domain performance should also improve target-domain performance, thus encouraging a parser to generalize to unseen target domains.",
"Experimental results on the (English) Spider and Chinese Spider datasets show that the meta-learning objective significantly boosts the performance of a baseline parser.",
"Semantic parsing is the task of mapping natural language (NL) utterances to executable programs.",
"While there has been much progress in this area, earlier work has primarily focused on evaluating parsers in-domain (e.g., tables or databases) and often with the same programs as those provided in training (Finegan-Dollak et al., 2018).",
"A much more challenging goal is achieving domain generalization , i.e., building parsers which can be successfully applied to new domains and are able to produce complex unseen programs.",
"Achieving this generalization goal would, in principle, let users query arbitrary (semi-)structured data on the Web and reduce the annotation effort required to build multi-domain NL interfaces (e.g., Apple database: farm Please show the different statuses of cities and the average population of cities with each status .",
"SELECT Status , avg( Population ) FROM City GROUP BY Status database: concert singer Show all countries and the number of singers in each country .",
"SELECT Country , count(*) FROM Singer GROUP BY Country Test Train Figure 1: Zero-shot semantic parsing: at training time, a parser observes instances for the database concert singer .",
"Siri or Amazon Alexa).",
"Current parsers struggle in this setting; for example, we show in Section 5 that a modern parser trained on the challenging Spider dataset (Yu et al., 2018b) has a gap of more than 25% in accuracy between inand out-of-domain performance.",
"While the importance of domain generalization has been previously acknowledged (Cai and Yates, 2013; Chang et al., 2020), and datasets targetting zero-shot (or out-of-domain) performance are becoming increasingly available (Pasupat and Liang, 2015; Wang et al., 2015; Zhong et al., 2017; Yu et al., 2018b), little or no attention has been devoted to studying learning algorithms or objectives which promote domain generalization.",
"Conventional supervised learning simply assumes that sourceand target-domain data originate from the same distribution, and as a result struggles to capture this notion of domain generalization for zero-shot semantic parsing.",
"Previous approaches (Guo et al., 2019b; Wang et al., 2020; Herzig and Berant, 2018) facilitate domain generalization by incorporating inductive biases in the model, e.g., designing linking features or functions which should be invariant under domain shifts.",
"In this work, we take a different direction and improve the domain generalization of a semantic parser by modifying the learning algorithm and the objective.",
"We draw inspiration from meta-learning (Finn et al., 2017; Li et al., 2018a) and use an objective that optimizes for domain generalization.",
"That is, we consider a set of tasks, where each task is a zero-shot semantic parsing task with its own source and target domains.",
"By optimizing towards better target-domain performance on each task, we encourage a parser to extrapolate from source-domain data and achieve better domain generalization.",
"Specifically, we focus on text-to-SQL parsing where we aim at translating NL questions to SQL queries and conduct evaluations on unseen databases.",
"Consider the example in Figure 1, a parser needs to process questions to a new database at test time.",
"To simulate this scenario during training, we synthesize a set of virtual zero-shot parsing tasks by sampling disjoint source and target domains 1 for each task from the training domains.",
"The objective we require is that gradient steps computed towards better source-domain performance would also be beneficial to target-domain performance.",
"One can think of the objective as consisting of both the loss on the source domain (as in standard supervised learning) and a regularizer, equal to the dot product between gradients computed on sourceand target-domain data.",
"Maximizing this regularizer favours finding model parameters that work not only on the source domain but also generalize to target-domain data.",
"The objective is borrowed from Li et al. (2018a) who adapt a Model-Agnostic Meta-Learning (MAML; Finn et al. 2017) technique for domain generalization in computer vision.",
"In this work, we study the effectiveness of this objective in the context of semantic parsing.",
"This objective is model-agnostic, simple to incorporate and does not require any changes in the parsing model itself.",
"Moreover, it does not introduce new parameters for meta-learning.",
"We handle zero-shot semantic parsing by applying a meta-learning objective that directly optimizes for domain generalization.",
"We propose an approximation of the meta-learning objective that is more efficient and allows more scalable training.",
"new training objectives obtain significant improvements in accuracy over a baseline parser trained with conventional supervised learning.",
"We show that even when parsers are augmented with pre-trained models, e.g., BERT, our method can still effectively improve domain generalization in terms of accuracy.",
"Our code is available at https://github.",
"com/berlino/tensor2struct-public .",
"Zero-Shot Semantic Parsing Developing a parser that can generalize to unseen domains has attracted increased attention in recent years.",
"Previous work has mainly focused on the sub-task of schema linking as a means of promoting domain generalization.",
"In schema linking, we need to recognize which columns or tables are mentioned in a question.",
"For example, a parser would decide to select the column Status because of the word statuses in Figure 1.",
"However, in the setting of zero-shot parsing, columns or tables might be mentioned in a question without ever being observed during training.",
"One line of work tries to incorporate inductive biases, e.g., domain-invariant n-gram matching features (Guo et al., 2019b; Wang et al., 2020), cross-domain alignment functions (Herzig and Be-rant, 2018), or auxiliary linking tasks (Chang et al., 2020) to improve schema linking.",
"However, in the cross-lingual setting of Chinese Spider (Min et al., 2019), where questions and schemas are not in the same language, it is not obvious how to design such inductive biases like n-gram matching features.",
"Another line of work relies on large-scale unsupervised pre-training on massive tables (Herzig et al., 2020; Yin et al., 2020) to obtain better representations for both questions and database schemas.",
"Our work is orthogonal to these approaches and can be easily coupled with them.",
"As an example, we show in Section 5 that our training procedure can improve the performance of a parser already enhanced with n-gram matching features (Guo et al., 2019b; Wang et al., 2020).",
"Our work is similar in spirit to Givoli and Reichart (2019), who also attempts to simulate source and target domains during learning.",
"However, their optimization updates on virtual source and target domains are loosely connected by a two-step training procedure where a parser is first pre-trained on virtual source domains and then fine-tuned on virtual target domains.",
"As we will show in Section 3, our training procedure does not fine-tune on virtual target domains but rather, uses them to evaluate a gradient step (for every batch) on source domains.",
"This is better aligned with what is expected of the parser at test time: there will be no fine-tuning on real target domains at test time so there should not be any fine-tuning on simulated ones at train time either.",
"Moreover, Givoli and Reichart (2019) treat the division of training domains to virtual train and test domains as a hyper-parameter, which is possible for a handful of domains, but problematic when dealing with hundreds of domains as is the case for text-to-SQL parsing.",
"Meta-Learning for NLP Meta-learning has been receiving soaring interest in the machine learning community.",
"Unlike conventional supervised learning, meta-learning operates on tasks, instead of data points.",
"Most previous work (Vinyals et al., 2016; Ravi and Larochelle, 2017; Finn et al., 2017) has focused on few-shot learning where meta-learning helps address the problem of learning to learn fast for adaptation to a new task or domain.",
"Applications of meta-learning in NLP are cast in a similar vein and include machine translation (Gu et al., 2018) and relation classification (Obamuyide and Vlachos, 2019).",
"The meta-learning framework however is more general, with the algorithms or underlying ideas applied, e.g., to continual learning (Gupta et al., 2020), semi-supervised learning (Ren et al., 2018), multi-task learning (Yu et al., 2020) and, as in our case, domain generalization (Li et al., 2018a).",
"Very recently, there have been some applications of MAML to semantic parsing tasks (Huang et al., 2018; Guo et al., 2019a; Sun et al., 2019).",
"These approaches simulate few-shot learning scenarios in training by constructing a pseudo-task for each example.",
"Given an example, similar examples are retrieved from the original training set.",
"MAML then encourages strong performance on the retrieved examples after an update on the original example, simulating test-time fine-tuning.",
"Lee et al. (2019) use matching networks (Vinyals et al., 2016) to enable one-shot text-to-SQL parsing where tasks for meta-learning are defined by SQL templates, i.e., a parser is expected to generalize to a new SQL template with one example.",
"In contrast, the tasks we construct for meta-learning aim to encourage generalization across domains, instead of adaptation to a new task with one (or few) examples.",
"One clear difference lies in how meta-train and meta-test sets are constructed.",
"In previous work (e.g., Huang et al. 2018), these come from the same domain whereas we simulate domain shift and sample different sets of domains for meta-train and meta-test.",
"Domain Generalization Although the notion of domain generalization has been less explored in semantic parsing, it has been studied in other areas such as computer vision (Ghifary et al., 2015; Zaheer et al., 2017; Li et al., 2018b).",
"Recent work (Li et al., 2018a; Balaji et al., 2018) employed optimization-based meta-learning to handle domain shift issues in domain generalization.",
"We employ the meta-learning objective originally proposed in Li et al. (2018a), where they adapt MAML to encourage generalization in unseen domains (of images).",
"Based on this objective, we propose a cheap alternative that only requires first-order gradients, thus alleviating the overhead of computing second-order derivatives required by MAML.",
"We first formally define the problem of domain generalization in the context of zero-shot text-to-SQL parsing.",
"Then, we introduce DG-MAML, a training algorithm that helps a parser achieve better domain generalization.",
"Finally, we propose a computationally cheap approximation thereof.",
"Domain Generalization Given a natural language question Q in the context of a relational database D , we aim at generating the corresponding SQLP .",
"In the setting of zero-shot parsing, we have a set of source domains D s where labeled question-SQL pairs are available.",
"We aim at developing a parser that can perform well on a set of unseen target domains D t .",
"We refer to this problem as domain generalization .",
"Parsing Model We assume a parameterized parsing model that specifies a predictive distribution p ( P | Q, D ) over all possible SQLs.",
"For domain generalization, a parsing model needs to properly condition on its input of questions and databases such that it can generalize well to unseen domains.",
"that question-SQL pairs from source domains and target domains are sampled i.i.d from the same",
"distribution, the typical training objective of supervised learning is to minimize the loss function of the negative log-likelihood of the gold SQL query:",
"where N is the size of mini-batch B .",
"Since a mini-batch is randomly sampled from all training source domains D s , it usually contains question-SQL pairs from a mixture of different domains.",
"Distribution of Tasks Instead of treating semantic parsing as a conventional supervised learning problem, we take an alternative view based on meta-learning.",
"Basically, wea re interested in a learning algorithm that can benefit from a distribution of choices of source and target domains, denoted by p ( ) , where refers to an instance of a zero-shot semantic parsing task that has its own source and target domains.",
"In practice, we usually have a fixed set of training source domains D s .",
"We construct a set of virtual tasks by randomly sampling disjoint source and target domains from the training domains.",
"Intuitively, we assume that divergences between the test and training domains during the learning phase are representative of differences between training and actual test domains.",
"This is still an assumption, but considerably weaker compared to the i.i.d. assumption used in conventional supervised learning.",
"Next, we introduce the training algorithm called DG-MAML motivated by this assumption.",
"Having simulated source and target domains for each virtual task, we now need a training algorithm that encourages generalization to unseen target domains in each task.",
"For this, we turn to optimization-based meta-learning algorithms (Finn et al., 2017; Nichol et al., 2018; Li et al., 2018a) and apply DG-MAML (Domain Generalization with Model-Agnostic Meta-Learning), a variant of MAML (Finn et al., 2017) for this purpose.",
"Intuitively, DG-MAML encourages the optimization in the source domain to have a positive effect on the target domain as well.",
"During each learning episode of DG-MAML, we randomly sample a task which has its own source domain D s and target domain D t .",
"For the sake of efficiency, we randomly sample mini-batch question-SQL pairs B s and B t from D s and D t , respectively, for learning in each task.",
"DG-MAML conducts optimization in two steps, namely meta-train and meta-test .",
"Meta-Train DG-MAML first optimizes parameters towards better performance in the virtual source domain D s by taking one step of stochastic gradient descent (SGD) from the loss under B s .",
"where is a scalar denoting the learning rate of meta-train.",
"This step resembles conventional supervised learning where we use stochastic gradient descent to optimize the parameters.",
"Meta-Test We then evaluate the resulting parameter (cid:48) in the virtual target domain D t by computing the loss under B t , which is denoted as LB t ( (cid:48) ) .",
"Our final objective for a task is to minimize the joint loss on D s and D t : L ( ) = LB s ( ) + LB t ( (cid:48) ) = LB s ( ) + LB t ( LB s ( )) (3) where we optimize towards the better source and target domain performance simultaneously.",
"Intuitively, the objective requires that the gradient step conducted in the source domains in Equation (2) be beneficial to the performance of the target domain as well.",
"In comparison, conventional supervised learning, whose objective would be equivalent to LB s ( ) + LB t ( ) , does not pose any constraint on the gradient updates.",
"As we will elaborate shortly, DG-MAML can be viewed as a regularization of gradient updates in addition to the objective of conventional supervised learning.",
"We summarize our DG-MAML training process in Algorithm 1.",
"Basically, it requires two steps of gradient update (Step 5 and Step 7).",
"Note that (cid:48) is a function of after the meta-train update.",
"Hence, optimizing L ( ) with respect to involves optimizing the gradient update in Equation (2) as well.",
"That is, when we update the parameters in the final update of Step 7, the gradients need to back-propagate though the meta-train updates in Step 5.",
"The update function in Step 7 could be based on any gradient descent algorithm.",
"In this work we use Adam (Kingma and Ba, 2015).",
"Comment Note that DG-MAML is different from MAML (Finn et al., 2017) which is typically used in the context of few-shot learning.",
"In our case, it encourages domain generalization during training, and does not require an adaptation phase.",
"To give an intuition of the objective in Equation (3), we follow previous work (Nichol et al., 2018; Li et al., 2018a) and use the first-order Taylor series expansion to approximate it:",
"L ( ) = LB s ( ) + LB t ( (cid:48) ) = LB s ( ) + LB t ( LB s ( )) L B s ( ) + LB t ( ) ( LB s ( ) LB t ( )) (4)",
"where in the last step we expand the function LB s at .",
"The approximated objective sheds light on what DG-MAML optimizes.",
"In addition to minimizing the losses from both source and target domains, which are LB s ( ) + LB t ( ) , DG-MAML further tries to maximize LB s ( ) LB t ( ) , the dot product between the gradients of source and target domain.",
"That is, it encourages gradients to generalize between source and target domain within each task .",
"The final update in Step 7 of Algorithm 1 requires second-order derivatives, which may be problematic, inefficient or non-stable with certain classes of models (Mensch and Blondel, 2018).",
"Hence, we propose an approximation that only requires computing first-order derivatives.",
"L ( ) = (cid:48) (cid:48) LB t ( (cid:48) ) + LB s ( ) = (cid:0) I 2 LB s ( ) (cid:1) (cid:48) LB t ( (cid:48) ) + LB s ( ) (5)",
"where I is an identity matrix and 2 LB s ( ) is the Hessian of LB s at .",
"We consider the alternative of ignoring this second-order term and simply assume that (cid:48) = I .",
"In this variant, we simply combine gradients from source and target domains.",
"We show in the Appendix that this objective can still be viewed as maximizing the dot product of gradients from source and target domain.",
"The resulting first-order training objective, which we refer to as DG-FMAML, is inspired by Reptile, a first-order meta-learning algorithm (Nichol et al., 2018) for few-shot learning.",
"A two-step Reptile would compute SGD on the same batch twice while DG-FMAML computes SGD on two different batches, B s and B t , once.",
"To put it differently, DG-FMAML tries to encourage cross-domain generalization while Reptile encourages in-domain generalization.",
"In general, DG-MAML is model-agnostic and can be coupled with any semantic parser to improve its domain generalization.",
"In this work, we use a base parser that is based on RAT-SQL (Wang et al., 2020), which currently achieves state-of-the-art performance on Spider.",
"2 Formally, RAT-SQL takes as input question Q and schema S of its corresponding database.",
"Then it produces a program which is represented as an abstract syntax tree T in the context-free grammar of SQL (Yin and Neubig, 2018).",
"RAT-SQL adopts the encoder-decoder framework for text-to-SQL parsing.",
"It has three components: an initial encoder, a transformer-based encoder and an LSTM-based decoder.",
"The initial encoder provides initial representations, denoted as Q init and S init for the question and the schema, respectively.",
"A relation-aware transformer (RAT) module then takes the initial representations and further computes context-aware representations Q enc and S enc for the question and the schema, respectively.",
"Finally, a decoder generates a sequence of production rules that constitute the abstract syntax tree T based on Q enc and S enc .",
"To obtain Q init and S init , the initial encoder could either be",
"1) LSTMs (Hochreiter and Schmidhuber, 1997) on top of pre-trained word embeddings, like GloVe (Pennington et al., 2014), or 2) pre-trained contextual embeddings like BERT (Devlin et al., 2 We re-implemented RAT-SQL, and added a component for value prediction so that our base parsers can be evaluated by execution accuracy. 2019).",
"of our method for both variants.",
"As shown in Wang et al. (2020), the encodings Q enc and S enc , which are the output of the RAT module, heavily rely on schema-linking features.",
"These features are extracted from a heuristic function that links question words to columns and tables based on n-gram matching, and they are readily available in the conventional mono-lingual setting of the Spider dataset.",
"However, we hypothesize that the parser's over-reliance on these features is specific to Spider, where annotators were shown the database schema and asked to formulate queries.",
"As a result, they were prone to re-using terms from the schema verbatim in their questions.",
"This would not be the case in a real-world application where users are unfamiliar with the structure of the underlying database and free to use arbitrary terms which would not necessarily match column or table names (Suhr et al., 2020).",
"Hence, we will also evaluate our parser in the cross-lingual setting where Q and S are not in the same language, and such features would not be available.",
"To evaluate DG-MAML, we integrate it with a base parser and test it on zero-shot text-to-SQL tasks.",
"By designing an in-domain benchmark, we also show that the out-of-domain improvement does not come at the cost of in-domain performance.",
"We also present some analysis to show how DG-MAML affects domain generalization.",
"We evaluate DG-MAML on two zero-shot text-to-SQL benchmarks, namely, (English) Spider (Yu et al., 2018b) and Chinese Spider (Min et al., 2019).",
"Chinese Spider is a Chinese version of Spider that translates all NL questions from English to Chinese and keeps the original English database.",
"It introduces the additional challenge of encoding cross-lingual correspondences between Chinese and English.",
"3 In both datasets, we report exact set match accuracy, following Yu et al. (2018b).",
"We also report execution accuracy in the Spider dataset.",
"Two kinds of features are widely used in recent semantic parsers to boost domain generalization:",
"schema-linking features (as mentioned in Section 4) and pre-trained emebddings such as BERT.",
"To show that our method can still achieve additional improvements, we compare with strong baselines that are integrated with schema-linking features and pre-trained embeddings.",
"In the analysis (Sec-tion 5.6), we will also show the effect of our method when both features are absent in the base parsers.",
"Our base parser is based on RAT-SQL (Wang et al., 2020), which is implemented in PyTorch (Paszke et al., 2019).",
"For English questions and schemas, we use GloVe (Pennington et al., 2014) and BERT-base (Devlin et al., 2019) as the pre-trained embeddings for encoding.",
"For Chinese questions, we use Tencent embeddings (Song et al., 2018) and Multilingual-BERT (Devlin et al., 2019).",
"In all experiments, we use a batch size of B s = B t = 12 and train for up to 20,000 steps.",
"See the Appendix for details on other hyperparameters.",
"Our main results on Spider and Chinese Spider are listed in Table 1 and 2, respectively.",
"Non-BERT Models DG-MAML boosts the performance of non-BERT base parsers on Spider and Chinese Spider by 2.1% and 4.5% respectively, showing its effectiveness in promoting domain generalization.",
"In comparison, the performance margin for DG-MAML is more significant in the cross-lingual setting of Chinese Spider.",
"This is presumably due to the fact that heuristic schema-linking features, which help promote domain generalization for Spider, are not applicable in Chinese Spider.",
"We will present more analysis on this in Section 5.6.",
"BERT Models Most importantly, improvements on both datasets are not cancelled out when the base parsers are augmented with pre-trained representations.",
"On Spider, the improvements brought by DG-MAML remain roughly the same when the base parser is integrated with BERT-base.",
"As a result, our base parser augmented with BERT-base and DG-MAML achieves the best execution accuracy compared with previous models.",
"On Chinese Spider, DG-MAML helps the base parser with multilingual BERT achieve a substantial improvement.",
"Overall, DG-MAML consistently boosts the performance of the base parser, and is complementary to using pre-trained representations.",
"To confirm that the base parser struggles when applied out-of-domain, we construct an in-domain setting and measure the gap in performance.",
"This setting also helps us address a natural question: does using DG-MAML hurt in-domain performance?",
"This would not have been surprising as the parser is explicitly optimized towards better performance on unseen target domains.",
"To answer these questions, we create a new split of Spider.",
"Specifically, for each database from the training and development set of Spider, we include 80% of its question-SQL pairs in the new training set and assign the remaining 20% to the new test set.",
"As a result, the new split consists of 7702 training examples and 1991 test examples.",
"When using this split, the parser is tested on databases that all have been seen during training.",
"We evaluate the non-BERT parsers with the same metric of set match for evaluation.",
"ent splits, and thus do not use the same test set, the direct comparison between them only serves as a proxy to illustrate the effect of domain shift.",
"We show that, despite the original split of out-of-domain setting containing a larger number of training examples (8659 vs 7702), the base parser tested in-domain achieves a much better performance (78.2%) than its counterpart tested out-of-domain (56.4%).",
"This suggests that the domain shift genuinely hurts the base parser.",
"Does DG-MAML hurt in-domain performance?",
"We study DG-MAML in the in-domain setting to see if it hurts in-domain performance.",
"Somewhat surprisingly, we instead observe a modest improvement (+1.1%) over the base parser.",
"This suggests that DG-MAML, despite optimizing the model towards domain generalization, captures, to a certain degree, a more general notion of generalization or robustness, which appears beneficial even in the in-domain setting.",
"We first discuss additional experiments on linking features and DG-FMAML, and then present further analysis probing how DG-MAML works.",
"As the test sets for both datasets are not publicly available, we will use the development sets.",
"Linking Features As mentioned in Section 2, previous work addressed domain generalization by focusing on the sub-task of schema linking.",
"For Spider, where questions and schemas are both in English, Wang et al. (2020) leverage n-gram matching features which improve schema linking and significantly boost parsing performance.",
"However, in Chinese Spider, it is not easy and obvious how to design such linking heuristics.",
"Moreover, as pointed out by Suhr et al. (2020), the assumption Model Dev (%) Spider Base Parser 55.6 0.5 + DG-FMAML 56.8 1.2 + DG-MAML 58.0 0.8 Base Parser without Features 38.2 1.0 + DG-FMAML 41.8 1.5 + DG-MAML 43.5 0.9 Chinese Spider Base Parser 29.7 1.1 + DG-FMAML 32.5 1.3 + DG-MAML 34.3 0.9 Table 3: Accuracy (and 95% confidence interval) on the development sets of Spider and Chinese Spider.",
"that columns/tables are explicitly mentioned is not general enough, implying that exploiting matching features would not be a good general solution to domain generalization.",
"Hence, we would like to see whether DG-MAML can be beneficial when those features are not present .",
"Specifically, we consider a variant of the base parser that does not use this feature, and train it with conventional supervised learning and with DG-MAML for Spider.",
"As shown 4 in Table 3, we confirm that those features have a big impact on the base parser.",
"More importantly, in the absence of those features, DG-MAML boosts the performance of the base parser by a larger margin.",
"This is consistent with the observation that DG-MAML is more beneficial for Chinese Spider than Spider, in the sense that the parser would need to rely more on DG-MAML when these heuristics are not integrated or not available for domain generalization.",
"Effect of DG-FMAML We investigate the effect of the first-order approximation in DG-FMAML to see if it would provide a reasonable performance compared with DG-MAML.",
"We evaluate it on the development sets of the two datasets, see Table 3.",
"DG-FMAML consistently boosts the performance of the base parser, although it lags behind DG-MAML.",
"For a fair comparison, we use the same batch size for DG-MAML and DG-FMAML.",
"However, because DG-FMAML uses less memory, it could potentially benefit from a larger batch size.",
"In practice, DG-FMAML is twice faster to train than DG-MAML, see Appendix for details.",
"Probing Domain Generalization Schema linking has been the focus of previous work on zero-shot semantic parsing.",
"We take the opposite direction and use this task to probe the parser to see if it, at least to a certain degree, achieves domain generalization due to improving schema linking.",
"We hypothesize that improving linking is the mechanism which prevents the parser from being trapped in overfitting the source domains .",
"We propose to use relevant column recognition' as a probing task.",
"Specifically, relevant columns refer to the columns that are mentioned in SQL queries.",
"For example, the SQL query Select Status, avg(Population) From City Groupby Status in Figure 1 contains two relevant columns: Status' and Population'.",
"We formalize this task as a binary classification problem.",
"Given a NL question and a column from the corresponding database, a classifier should predict whether the column is mentioned in the gold SQL query.",
"We hypothesize that representations from the DG-MAML parser will be more predictive of relevance than those of the baseline, and the probing classifier will detect this difference in the quality of the representations.",
"We first obtain the representations for NL questions and schemas from the parsers and keep them fixed.",
"The binary classifier is then trained based only on these representations.",
"For classifier training we use the same split as the Spider dataset, i.e., the classifier is evaluated on unseen databases.",
"Details of the classifier are provided in the Appendix.",
"The results are shown in Table 4.",
"The classifier trained on the parser with DG-MAML achieves better performance.",
"This confirms our hypothesis that using DG-MAML makes the parser have better encodings of NL questions and database schemas and that this is one of the mechanisms the parsing model uses to ensure generalization.",
"The task of zero-shot semantic parsing has been gaining momentum in recent years.",
"However, previous work has not proposed algorithms or objectives that explicitly promote domain generalization.",
"We rely on the meta-learning framework to encourage domain generalization.",
"Instead of learning from individual data points, DG-MAML learns from a set of virtual zero-shot parsing tasks.",
"By optimizing towards better target-domain performance in each simulated task, DG-MAML encourages the parser to generalize better to unseen domains.",
"We conduct experiments on two zero-shot text-to-SQL parsing datasets.",
"In both cases, using DG-MAML leads to a substantial boost in performance.",
"Furthermore, we show that the faster first-order approximation DG-FMAML can also help a parser achieve better domain generalization.",
"We thank Bo Pang, Tao Yu, Qingkai Min and Yuefeng Shi for their help with the evaluation.",
"We would like to thank the anonymous reviewers for their valuable comments, and Jackie Cheung for pointing out a typo in Eq 4 in the draft version.",
"We gratefully acknowledge the support of the European Research Council (Titov: ERC StG BroadSem 678254; Lapata: ERC CoG TransModal 681760), the Dutch National Science Foundation (NWO VIDI 639.022.518) and EU H2020 Research and Innovation Programme (GoURMET 825299)."
] | [
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Unlike widely used Named Entity Recognition (NER) data sets in generic domains, biomedical NER data sets often contain mentions consisting of discontinuous spans.",
"Conventional sequence tagging techniques encode Markov assumptions that are efficient but preclude recovery of these mentions.",
"We propose a simple, effective transition-based model with generic neural encoding for discontinuous NER.",
"Through extensive experiments on three biomedical data sets, we show that our model can effectively recognize discontinuous mentions without sacrificing the accuracy on continuous mentions.",
"Named Entity Recognition (NER) is a critical component of biomedical natural language processing applications.",
"In pharmacovigilance, it can be used to identify adverse drug events in consumer reviews in online medication forums, alerting medication developers, regulators and clinicians (Leaman et al., 2010; Sarker et al., 2015; Karimi et al., 2015b).",
"In clinical settings, NER can be used to extract and summarize key information from electronic medical records such as conditions hidden in unstructured doctors' notes (Feblowitz et al., 2011; Wang et al., 2018b).",
"These applications require identification of complex mentions not seen in generic domains (Dai, 2018).",
"Widely used sequence tagging techniques ( flat model ) encode two assumptions that do not always hold: (1) mentions do not nest or overlap, therefore each token can belong to at most one mention; and, (2) mentions comprise continuous sequences of tokens.",
"Nested entity recognition addresses violations of the first assumption (Lu and Roth, 2015; Katiyar and Cardie, 2018; Sohrab and Miwa, 2018; Ringland et al., 2019).",
"However, the violation of The left atrium is mildly dilated .",
"E1 E1 have much muscle pain and fatigue .",
"the second assumption is comparatively less studied and requires handling discontinuous mentions (see examples in Figure 1).",
"In contrast to continuous mentions which are often short spans of text, discontinuous mentions consist of components that are separated by intervals .",
"Recognizing discontinuous mentions is particularly challenging as exhaustive enumeration of possible mentions, including discontinuous and overlapping spans, is exponential in sentence length.",
"Existing approaches for discontinuous NER either suffer from high time complexity (McDonald et al., 2005) or ambiguity in translating intermediate representations into mentions (Tang et al., 2013a; Metke-Jimenez and Karimi, 2016; Muis and Lu, 2016).",
"In addition, current art uses traditional approaches that rely on manually designed features, which are tailored to recognize specific entity types.",
"Also, these features usually do not generalize well in different genres (Leaman et al., 2015).",
"Motivations The main motivation for recognizing discontinuous mentions is that they usually represent compositional concepts that differ from concepts represented by individual components.",
"For example, the mention left atrium dilated' in the first example of Figure 1 describes a disorder which has its own CUI (Concept Unique Identi-fier) in UMLS (Unified Medical Language System), whereas both left atrium' and dilated' also have their own CUIs.",
"We argue that, in downstream applications such as pharmacovigilance and summarization, recognizing these discontinuous mentions that refer to disorders or symptoms is more useful than recognizing separate components which may refer to body locations or general feelings.",
"Another important characteristic of discontinuous mentions is that they usually overlap .",
"That is, several mentions may share components that refer to the same body location (e.g., muscle' in muscle pain and fatigue' ), or the same feeling (e.g., Pain' in Pain in knee and foot' ).",
"Separating these overlapping mentions rather than identifying them as a single mention is important for downstream tasks, such as entity linking where the assumption is that the input mention refers to one entity (Shen et al., 2015).",
"Contributions We propose an end-to-end transition-based model with generic neural encoding that allows us to leverage specialized actions and attention mechanism to determine whether a span is the component of a discontinuous mention or not.",
"1 We evaluate our model on three biomedical data sets with a substantial number of discontinuous mentions and demonstrate that our model can effectively recognize discontinuous mentions without sacrificing the accuracy on continuous mentions.",
"Existing methods on discontinuous NER can be mainly categorized into two categories: token level approach, based on sequence tagging techniques, and sentence level approach, where a combination of mentions within a sentence is jointly predicted (Dai, 2018).",
"Token level approach Sequence tagging model takes a sequence of tokens as input and outputs a tag for each token, composed of a position indicator (e.g., BIO schema) and an entity type.",
"The vanilla BIO schema cannot effectively represent discontinuous, overlapping mentions, therefore, some studies overcome this limitation via expanding the BIO tag set (Tang et al., 2013a; Metke-Jimenez and Karimi, 2016; Dai et al., 2017; Tang et al., 2018).",
"In addition to BIO indicators, four new position indicators are introduced in (Metke-Jimenez and 1 Code available at GitHub: https://bit.ly/2XazEAO Karimi, 2016) to represent discontinuous mentions that may overlap: BH : B eginning of H ead, defined as the components shared by multiple mentions; IH : I ntermediate of H ead; BD : B eginning of D iscontinuous body, defined as the exclusive components of a discontinuous mention; and ID : I ntermediate of D iscontinuous body.",
"Sentence level approach Instead of predicting whether each token belongs to an entity mention and its role in the mention, sentence level approach predicts a combination of mentions within a sentence.",
"A hypergraph, proposed by Lu and Roth (2015) and extended in (Muis and Lu, 2016), can compactly represent discontinuous and overlapping mentions in one sentence.",
"A sub-hypergraph of the complete hypergraph can, therefore, be used to represent a combination of mentions in the sentence.",
"For the token at each position, there can be six different node types: A : mentions that start from the current token or a future token; E : mentions that start from the current token; T : mentions of a certain entity type that start from the current token; B : mentions that contain the current token; O : mentions that have an interval at the current token; X : mentions that end at the current token.",
"Using this representation, a single entity mention can be represented as a path from node A to node X , incorporating at least one node of type B .",
"Note that both token level and sentence level approaches predict first an intermediate representation of mentions (e.g., a sequence of tags in (Metke-Jimenez and Karimi, 2016) and a sub-hypergraph in (Muis and Lu, 2016)), which are then decoded into the final mentions.",
"During the final decoding stage, both models suffer from some level of ambiguity.",
"Taking the sequence tagging model using BIO variant schema as an example, even if the model can correctly predict the gold sequence of tags for the example sentence muscle pain and fatigue' (BH I O BD), it is still not clear whether the token muscle' forms a mention by itself, because the same sentence containing three mentions ( muscle' , muscle pain' and muscle fatigue' ) can be encoded using the same gold sequence of tags.",
"We refer to a survey by (Dai, 2018) for more discussions on these models, and (Muis and Lu, 2016) for a theoretical analysis of ambiguity of these models.",
"Similar to prior work, our proposed transition-based model uses an intermediate representation (i.e., a sequence of actions).",
"However, it does not suffer from this ambiguity issue.",
"That is, the output sequence of actions can always be unambiguously decoded into mention outputs.",
"The other two methods that focus on the discontinuous NER problem in literature are described in (McDonald et al., 2005; Wang and Lu, 2019).",
"McDonald et al. (2005) solve the NER task as a structured multi-label classification problem.",
"Instead of starting and ending indices, they represent each entity mention using the set of token positions that belong to the mention.",
"This representation is flexible, as it allows mentions consisting of discontinuous tokens and does not require mentions to exclude each other.",
"However, this method suffers from high time complexity.",
"Tang et al. (2018) compare this representation with BIO variant schema proposed in (Metke-Jimenez and Karimi, 2016), and found that they achieve competitive F 1 scores, although the latter method is more efficient.",
"A two-stage approach that first detects all components and then combines components into discontinuous mentions based on a classifier's decision was explored in recent work by Wang and Lu (2019).",
"Discontinuous NER vs. Nested NER Although discontinuous mentions may overlap, we discriminate this overlapping from the one in nested NER.",
"That is, if one mention is completely contained by the other, we call mentions involved nested entity mentions.",
"In contrast, overlapping in discontinuous NER is usually that two mentions overlap, but no one is completely contained by the other.",
"Most of existing nested NER models are built to tackle the complete containing structure (Finkel and Manning, 2009; Lu and Roth, 2015), and they cannot be directly used to identify overlapping mentions studied in this paper, nor mention the discontinuous mentions.",
"However, we note that there is a possible perspective to solve discontinuous NER task by adding fine-grained entity types into the schema.",
"Taking the second sentence in Figure 1 have much muscle pain and fatigue .",
"as an example, we can add two new entity types: Body Location' and 'General Feeling', and then annotate muscle pain and fatigue' as a Adverse drug event' mention, muscle' as a Body Location' mention, and pain' and fatigue' as General Feel-ing' mentions (Figure 2).",
"Then the discontinuous NER task can be converted into a Nested NER task.",
"Transition-based models, due to their high effi-ciency, are widely used for NLP tasks, such as parsing and entity recognition (Chen and Manning, 2014; Lample et al., 2016; Lou et al., 2017; Wang et al., 2018a).",
"The model we propose for discontinuous NER is based on the shift-reduce parser (Watanabe and Sumita, 2015; Lample et al., 2016) that employs a stack to store partially processed spans and a buffer to store unprocessed tokens.",
"The learning problem is then framed as: given the state of the parser, predict an action which is applied to change the state of the parser.",
"This process is repeated until the parser reaches the end state (i.e., the stack and buffer are both empty).",
"The main difference between our model and the ones in (Watanabe and Sumita, 2015; Lample et al., 2016) is the set of transition actions.",
"Watanabe and Sumita (2015) use SHIFT, REDUCE, UNARY, FINISH, and IDEA for the constituent parsing system.",
"Lample et al. (2016) use SHIFT, REDUCE, OUT for the flat NER system.",
"Inspired by these models, we design a set of actions specifically for recognizing discontinuous and overlapping structure.",
"There are in total six actions in our model: SHIFT moves the first token from the buffer to the stack; it implies this token is part of an entity mention.",
"OUT pops the first token of the buffer, indicating it does not belong to any mention.",
"COMPLETE pops the top span of the stack, outputting it as an entity mention.",
"If we are interested in multiple entity types, we can extend this action to COMPLETEy which labels the mention with entity type y .",
"REDUCE pops the top two spans s 0 and s 1 from the stack and concatenates them as a new span which is then pushed back to the stack.",
"LEFT-REDUCE is similar to the REDUCE action, except that the span s 1 is kept in the stack.",
"This action indicates the span s 1 is involved in multiple mentions.",
"In other words, several mentions share s 1 which could be a single token or several tokens.",
"Figure 3 shows an example about how the parser recognizes entity mentions from a sentence.",
"Note that, given one parser state, not all types of actions are valid.",
"For example, if the stack does not contain any span, only SHIFT and OUT actions are valid because all other actions involve popping spans from the stack.",
"We employ hard constraints that we only select the most likely action from valid actions.",
"Given a sequence of N tokens, we first run a bidirectional LSTM (Graves et al., 2013) to derive the contextual representation of each token.",
"Specifically, for the i -th token in the sequence, its representation can be denoted as: c i = (cid:104) LSTM ( t 0 , . . . , t i ); LSTM ( t i , . . . , t N 1 ) (cid:105) , where t i is the concatenation of the embeddings for the i -th token, its character level representation learned using a CNN network (Ma and Hovy, 2016).",
"Pretrained contextual word representations have shown its usefulness on improving various NLP tasks.",
"Here, we can also concatenate pretrained contextual word representations using ELMo (Peters et al., 2018) with c i , resulting in: c i = [ c i ; ELMo i ] , (1) where ELMo i is the output representation of pretrained ELMo models (frozen) for the i -th token.",
"These token representations c are directly used to represent tokens in the buffer.",
"We also explore a variant that uses the output of pretrained BERT (De-vlin et al., 2019) as token representations c , and fine-tune the BERT model.",
"However, this fine-tuning approach with BERT does not achieve as good performance as feature extraction approach with ELMo (Peters et al., 2019).",
"Following the work in (Dyer et al., 2015), we use Stack-LSTM to represent spans in the stack.",
"That is, if a token is moved from the buffer to the stack, its representation is learned using: s 0 = Stack-LSTM ( s D . . . s 1 ; c SHIFT ) , where D is the number of spans in the stack.",
"Once REDUCE related actions are applied, we use a multi-layer perceptron to learn the representation of the concatenated span.",
"For example, the REDUCE action takes the representation of the top two spans in the stack: s 0 and s 1 , and produces a new span representation: s = WT [ s 0 ; s 1 ] + b, where W and b denote the parameters for the composition function.",
"The new span representation s is pushed back to the stack to replace the original two spans: s 0 and s 1 .",
"We hypothesize that the interactions between spans in the stack and tokens in the buffer are important factors in recognizing discontinuous mentions.",
"Considering the example in Figure 3, a span in the stack (e.g., muscle' ) may need to combine with a future token in the buffer (e.g., fatigue' ).",
"To capture this interaction, we use multiplicative attention (Luong et al., 2015) to let the span in the stack s i learn which token in the buffer to attend, and thus a weighted sum of the representation of tokens in the buffer B : s ai = softmax ( s Ti W ai B ) B .",
"We use distinct W ai for s i separately.",
"Finally, we build the parser representation as the concatenation of the representation of top three spans from the stack ( s 0 , s 1 , s 2 ) and its attended representation ( s a0 , s a1 , s a2 ), as well as the representation of the previous action a , which is learned using a simple unidirectional LSTM.",
"If there are less than 3 spans in the stack or no previous action, we use randomly initialized vectors s empty or a empty to replace the corresponding vector.",
"This parser representation is used as input for the final softmax prediction layer to select the next action.",
"Although some text annotation tools, such as BRAT (Stenetorp et al., 2012), allow discontinuous annotations, corpora annotated with a large number of discontinuous mentions are still rare.",
"We use three data sets from the biomedical domain: CADEC (Karimi et al., 2015a), ShARe 13 (Prad-han et al., 2013) and ShARe 14 (Mowery et al., 2014).",
"Around 10% of mentions in these three data sets are discontinuous.",
"The descriptive statistics are listed in Table 1.",
"CADEC is sourced from AskaPatient 2 , a forum where patients can discuss their experiences with medications.",
"The entity types in CADEC include drug, Adverse Drug Event (ADE), disease and symptom.",
"We only use ADE annotations because only the ADEs involve discontinuous annotations.",
"This also allows us to compare our results directly against previously reported results (Metke-Jimenez and Karimi, 2016; Tang et al., 2018).",
"ShARe 13 and 14 focus on the identification of disorder mentions in clinical notes, including discharge summaries, electrocardiogram, echocardiogram, and radiology reports (Johnson et al., 2016).",
"A disorder mention is defined as any span of text which can be 2 https://www.askapatient.com/ CADEC ShARe 13 ShARe 14 Text type online posts clinical notes clinical notes Entity type ADE Disorder Disorder # Documents 1,250 298 433 # Tokens 121K 264K 494K # Sentences 7,597 18,767 34,618 # Mentions 6,318 11,161 19,131 # Disc.M 675 (10.6) 1,090 (9.7) 1,710 (8.9) Avg mention L. 2.7 1.8 1.7 Avg Disc.M L. 3.5 2.6 2.5 Avg interval L. 3.3 3.0 3.2 Discontinuous Mentions 2 components 650 (95.7) 1,026 (94.3) 1,574 (95.3) 3 components 27 ( 3.9) 62 ( 5.6) 76 ( 4.6) 4 components 2 ( 0.2) 0 ( 0.0) 0 ( 0.0) No overlap 82 (12.0) 582 (53.4) 820 (49.6) Overlap at left 351 (51.6) 376 (34.5) 616 (37.3) Overlap at right 152 (22.3) 102 ( 9.3) 170 (10.3) Multiple overlaps 94 (13.8) 28 ( 2.5) 44 ( 2.6) Continuous Mentions Overlap 326 ( 5.7) 157 ( 1.5) 228 ( 1.3) Table 1: The descriptive statistics of the data sets.",
"of SNOMED-CT (Cornet and de Keizer, 2008).",
"Although these three data sets share similar field (the subject matter of the content being discussed), the tenor (the participants in the discourse, their relationships to each other, and their purposes) of CADEC is very different from the ShARe data sets (Dai et al., 2019).",
"In general, laymen (i.e., in CADEC) tend to use idioms to describe their feelings, whereas professional practitioners (i.e., in ShARe) tend to use compact terms for efficient communications.",
"This also results in different features of discontinuous mentions between these data sets, which we will discuss further in 7.",
"Experimental Setup As CADEC does not have an official train-test split, we follow Metke-Jimenez and Karimi (2016) and randomly assign 70% of the posts as the training set, 15% as the development set, and the remaining posts as the test set.",
"3 The train-test splits of ShARe 13 and 14 are both from their corresponding shared task settings, except that we randomly select 10% of documents from each training set as the development set.",
"Micro 3 These splits can be downloaded from https://bit.ly/2XazEAO.",
"average strict match F 1 score is used to evaluate the effectiveness of the model.",
"The trained model which is most effective on the development set, measured using the F 1 score, is used to evaluate the test set.",
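"For concreteness, a minimal sketch of micro-averaged strict-match F1 over mention sets follows; the encoding of a mention as a (type, spans) tuple is our assumption.",

```python
def strict_micro_f1(gold, pred):
    """Micro-averaged strict-match F1.

    gold / pred: lists (one per document) of mention collections, where each
    mention is a hashable tuple such as (entity_type, ((s1, e1), (s2, e2))).
    A prediction counts only if it matches a gold mention exactly.
    """
    tp = fp = fn = 0
    for gold_doc, pred_doc in zip(gold, pred):
        gold_set, pred_set = set(gold_doc), set(pred_doc)
        tp += len(gold_set & pred_set)
        fp += len(pred_set - gold_set)
        fn += len(gold_set - pred_set)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```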
"We choose one flat NER model which is strong at recognizing continuous mentions, and two discontinuous NER models as our baseline models:",
"Flat model: To train the flat model on our data sets, we use an off-the-shelf framework, Flair (Akbik et al., 2018), which achieves state-of-the-art performance on the CoNLL 03 data set.",
"Recall that the flat model cannot be directly applied to data sets containing discontinuous mentions.",
"Following the practice of Stanovsky et al. (2017), we replace each discontinuous mention with the shortest span that fully covers it, and merge overlapping mentions into a single mention that covers both (a sketch of this conversion is given below).",
"Note that, different from Stanovsky et al. (2017), we apply these changes only to the training set, not to the development set and the test set.",
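"The conversion described above can be sketched as follows; the interval-merging details are our reading of Stanovsky et al. (2017), not their code.",

```python
def to_flat_mentions(mentions):
    """Convert discontinuous mentions to continuous ones for flat-NER training.

    mentions: list of mentions, each a list of (start, end) token spans.
    Step 1: replace each discontinuous mention with the shortest covering span.
    Step 2: merge overlapping mentions into a single covering span.
    """
    # Step 1: shortest span that fully covers each mention.
    covers = sorted((min(s for s, _ in m), max(e for _, e in m)) for m in mentions)
    # Step 2: merge any overlapping covering spans.
    merged = []
    for start, end in covers:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```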
"BIO extension model: The original implementation of Metke-Jimenez and Karimi (2016) used a CRF model with manually designed features.",
"We report their results on CADEC in Table 2 and reimplement a BiLSTM-CRF-ELMo model using their tag schema (denoted as 'BIO Extension' in Table 2).",
"Graph-based model: The original paper of Muis and Lu (2016) only reported evaluation results on sentences which contain at least one discontinuous mention.",
"We use their implementation to train the model and report evaluation results on the whole test set (denoted as 'Graph' in Table 2).",
"We argue that it is important to see how a discontinuous NER model works not only on the discontinuous mentions but also on all the mentions, especially since, in real data sets, the ratio of discontinuous mentions cannot be known a priori.",
"We do not choose the model proposed in (Wang and Lu, 2019) as the baseline model, because it is based on a strong assumption about the ratio of discontinuous mentions.",
"Wang and Lu (2019) train and evaluate their model on sentences that contain at least one discontinuous mention.",
"Our early experiments show that the effectiveness of their model strongly depends on this assumption.",
"In contrast, we train and evaluate our model in a more practical setting where the number of continuous mentions is much larger than that of discontinuous mentions.",
"When evaluated on the whole test set, our model outperforms all three baseline models, as well as previously reported results in the literature, in terms of recall and F1 score (Table 2).",
"The graph-based model achieves the highest precision but substantially lower recall, and therefore obtains the lowest F1 score.",
"In contrast, our model improves recall over flat and BIO extension models as well as previously reported results, without sacrificing precision.",
"This results in more balanced precision and recall.",
"Improved recall is especially encouraging for our motivating pharmacovigilance and medical record summarization applications, where recall is at least as important as precision.",
"Effectiveness on recognizing discontinuous mentions: Recall that only 10% of mentions in these three data sets are discontinuous.",
"To evaluate the effectiveness of our proposed model on recognizing discontinuous mentions, we follow the evaluation approach of Muis and Lu (2016) and construct a subset of the test set in which only sentences with at least one discontinuous mention are included (left part of Table 3).",
"We also report the evaluation results when only discontinuous mentions are considered (Right part of Table 3).",
"Note that sentences in the former setting usually contain continuous mentions as well, including those involved in overlapping structure (e.g., 'muscle pain' in the sentence 'muscle pain and fatigue').",
"Therefore, the flat model, which cannot predict any discontinuous mentions, still achieves 38% F1 on average when evaluated on these sentences with at least one discontinuous mention, but 0% F1 when evaluated on discontinuous mentions only.",
"Our model again achieves the highest F 1 and recall in all three data sets under both settings.",
"The comparison between these two evaluation results also shows the necessity of comprehensive evaluation settings.",
"The BIO E. model outperforms the graph-based model in terms of F 1 score on CADEC, when evaluated on sentences with discontinuous mentions.",
"However, it achieves only 1.8 F1 when evaluated on discontinuous mentions only.",
"The main reason is that most discontinuous mentions in CADEC are involved in overlapping structure (88%, cf. Table 1), and the BIO E. model is better than the graph-based model at recognizing these continuous mentions.",

Table 2: Evaluation results on the whole test set in terms of precision, recall and F1 score.

| Model | CADEC P | CADEC R | CADEC F | ShARe 13 P | ShARe 13 R | ShARe 13 F | ShARe 14 P | ShARe 14 R | ShARe 14 F |
|---|---|---|---|---|---|---|---|---|---|
| (Metke-Jimenez and Karimi, 2016) | 64.4 | 56.5 | 60.2 | | | | | | |
| (Tang et al., 2018) | 67.8 | 64.9 | 66.3 | | | | | | |
| (Tang et al., 2013b) | | | | 80.0 | 70.6 | 75.0 | | | |
| Flat | 65.3 | 58.5 | 61.8 | 78.5 | 66.6 | 72.0 | 76.2 | 76.7 | 76.5 |
| BIO Extension | 68.7 | 66.1 | 67.4 | 77.0 | 72.9 | 74.9 | 74.9 | 78.5 | 76.6 |
| Graph | 72.1 | 48.4 | 58.0 | 83.9 | 60.4 | 70.3 | 79.1 | 70.7 | 74.7 |
| Ours | 68.9 | 69.0 | 69.0 | 80.5 | 75.0 | 77.7 | 78.1 | 81.2 | 79.6 |
"On ShARe 13 and 14, where the proportion of discontinuous mentions involved in overlap is much smaller than in CADEC, the graph-based model clearly outperforms the BIO E. model in both evaluation settings.",
"We start our analysis by characterizing discontinuous mentions from the three data sets.",
"Then we measure the behaviors of our model and two discontinuous NER models on the development sets based on the characteristics identified and attempt to draw conclusions from these measurements.",
"Recall that discontinuous mentions usually represent compositional concepts that consist of multiple components.",
"Therefore, discontinuous mentions are usually longer than continuous mentions (Table 1).",
"In addition, intervals between components make the total length of span involved even longer.",
"Previous work shows that flat NER performance degrades when applied on long mentions (Augenstein et al., 2017; Xu et al., 2017).",
"Another characteristic of discontinuous mentions is that they usually overlap (cf. Table 1).",
"From this perspective, we can categorize discontinuous mentions into four categories: No overlap: in such cases, the components of the discontinuous mention are separated by severity indicators (e.g., 'is mildly' in the sentence 'left atrium is mildly dilated'), prepositions (e.g., 'on my' in the sentence '...rough on my stomach...'), and so on.",
"This category accounts for half of discontinuous mentions in the ShARe data sets but only 12% in CADEC (Table 1).",
"Left overlap: the discontinuous mention shares one component with other mentions, and the shared component is at the beginning of the discontinuous mention.",
"This is usually accompanied by a coordination structure (e.g., the shared component 'muscle' in 'muscle pain and fatigue').",
"Conjunctions (e.g., 'and', 'or') are clear indicators of the coordination structure.",
"However, clinical notes are usually written by practitioners under time pressure.",
"They often use commas or slashes rather than conjunctions.",
"This category accounts for more than half of discontinuous mentions in CADEC and one third in ShARe.",
"Right overlap: similar to left overlap, except that the shared component is at the end.",
"For example, 'hip/leg/foot pain' contains three mentions that share 'pain'.",
"Figure 4 (excerpt): recall plotted against mention length (bins 1 (280), 2 (307), 3 (136), 4 (69), >=5 (106)) for the BIO E., Graph, and Ours models.",
"Multi-overlap: the discontinuous mention shares multiple components with the others, which usually forms crossing compositions.",
"For example, the sentence 'Joint and Muscle Pain / Stiffness' contains four mentions: 'Joint Pain', 'Joint Stiffness', 'Muscle Stiffness' and 'Muscle Pain', where each discontinuous mention shares two components with the others.",
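"A minimal sketch of this four-way categorization, assuming mentions are given as left-to-right sorted lists of (start, end) component spans; the handling of shared middle components is our simplification, not the paper's procedure.",

```python
def overlap_category(mention, others):
    """Assign a discontinuous mention to one of the four overlap categories.

    mention: list of (start, end) component spans, sorted left to right.
    others: all other mentions in the same sentence, same format.
    """
    components = set(mention)
    # Components that this mention shares with any other mention.
    shared = {c for other in others for c in other if c in components}
    if not shared:
        return "no_overlap"
    if len(shared) > 1:
        return "multi_overlap"
    (only,) = shared
    return "left_overlap" if only == mention[0] else "right_overlap"
```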
"Previous study shows that the intervals between components can be problematic for coordination boundary detection (Ficler and Goldberg, 2016).",
"Conversely, we want to observe whether the overlapping structure may help or hinder discontinuous entity recognition.",
"We categorize discontinuous mentions into different subsets, as described in Section 7.1, and measure the effectiveness of different discontinuous NER models on each category.",
"From Table 4, we find that our model achieves better results on discontinuous mentions belonging to the 'No overlap' category on ShARe 13 and 14, and the 'Left overlap' category on CADEC and ShARe 14.",
"Note that the 'No overlap' category accounts for half of discontinuous mentions in ShARe 13 and 14, whereas 'Left overlap' accounts for half in CADEC (Table 1).",
"The graph-based model achieves better results on the 'Right overlap' category.",
"On the 'Multi-overlap' category, no model is effective, which emphasizes the challenge of dealing with this syntactic phenomenon.",
"We note, however, that the proportion of discontinuous mentions belonging to this category is very small in all three data sets.",
"Although our model achieves better results on the 'No overlap' category on ShARe 13 and 14, it does not correctly predict any discontinuous mention belonging to this category on CADEC.",
"The ineffectiveness of our model, as well as other discontinuous NER models, on the CADEC 'No overlap' category can be attributed to two reasons: 1) the number of discontinuous mentions belonging to this category in CADEC is small (around 12%), rendering the learning process more difficult;",
"2) the gold annotations belonging to this category are inconsistent from a linguistic perspective.",
"For example, severity indicators are sometimes annotated as the interval of the discontinuous mention, but not consistently.",
"Note that this may be reasonable from a medical perspective, as some symptoms are roughly grouped together no matter their severity, whereas some symptoms are linked to different concepts based on their severity.",
"We conduct experiments to measure the ability of different models on recalling mentions of different lengths, and to observe the impact of interval lengths.",
"We found that, in general, the recall of all models decreases as mention length increases (Figure 4 (a)-(c)), which is similar to previous observations in the literature on flat mentions.",

Table 4: Evaluation results (F1) on different categories of discontinuous mentions (# is the number of gold mentions in the category).

| Category | Model | CADEC # | CADEC F | ShARe 13 # | ShARe 13 F | ShARe 14 # | ShARe 14 F |
|---|---|---|---|---|---|---|---|
| No O. | BIO E. | 9 | 0.0 | 41 | 7.5 | 39 | 0.0 |
| No O. | Graph | | 0.0 | | 32.1 | | 45.2 |
| No O. | Ours | | 0.0 | | 36.1 | | 57.1 |
| Left O. | BIO E. | 54 | 6.0 | 11 | 25.0 | 30 | 15.7 |
| Left O. | Graph | | 9.2 | | 45.5 | | 37.7 |
| Left O. | Ours | | 28.6 | | 33.3 | | 49.2 |
| Right O. | BIO E. | 16 | 0.0 | 19 | 0.0 | 5 | 0.0 |
| Right O. | Graph | | 45.2 | | 21.4 | | 0.0 |
| Right O. | Ours | | 29.3 | | 13.3 | | 0.0 |
| Multi O. | BIO E. | 15 | 0.0 | 0 | | 6 | 0.0 |
| Multi O. | Graph | | 0.0 | | | | 0.0 |
| Multi O. | Ours | | 0.0 | | | | 0.0 |
"However, the impact of interval length is not straightforward.",
"Mentions with very short interval lengths are as difficult to recognize as those with very long interval lengths (Figure 4 (d)-(f)).",
"On CADEC, discontinuous mentions with an interval length of 2 are the easiest to recognize (Figure 4 (d)), whereas those with an interval length of 3 are the easiest on ShARe 13 and 14.",
"We hypothesize this also relates to annotation inconsistency, because very short intervals may be overlooked by annotators.",
"In terms of model comparison, our model achieves the highest recall in most settings.",
"This demonstrates that our model is effective at recognizing both continuous and discontinuous mentions of various lengths.",
"In contrast, the BIO E. model is only strong at recalling continuous mentions (outperforming the graph-based model), but fails on discontinuous mentions (interval length > 0).",
"We find that previous models often fail to identify discontinuous mentions that involve long and overlapping spans.",
"For example, the sentence 'Severe joint pain in the shoulders and knees.' contains two mentions: 'Severe joint pain in the shoulders' and 'Severe joint pain in the knees'.",
"The graph-based model does not identify any mention in this sentence, resulting in low recall.",
"The BIO extension model predicts most of these tags (8 out of 9) correctly, but fails to decode them into the correct mentions (it predicts 'Severe joint pain in the', resulting in a false positive, while missing 'Severe joint pain in the shoulders').",
"In contrast, our model correctly identifies both of these two mentions.",
"No model can fully recognize mentions which form crossing compositions.",
"For example, the sentence 'Joint and Muscle Pain / Stiffness' contains four mentions: 'Joint Pain', 'Joint Stiffness', 'Muscle Stiffness' and 'Muscle Pain', all of which share multiple components with the others.",
"Our model correctly predicts 'Joint Pain' and 'Muscle Pain', but it mistakenly predicts 'Stiffness' itself as a mention.",
"We propose a simple, effective transition-based model that can recognize discontinuous mentions without sacrificing the accuracy on continuous mentions.",
"We evaluate our model on three biomedical data sets with a substantial number of discontinuous mentions.",
"Comparing against two existing discontinuous NER models, our model is more effective, especially in terms of recall.",
"We would like to thank Danielle Mowery for helping us to obtain the ShARe data sets.",
"We also thank anonymous reviewers for their insightful comments.",
"Xiang Dai is supported by Sydney University's Engineering and Information Technologies Research Scholarship as well as CSIRO's Data61 top up scholarship."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other"
] |
[
"In recent years, machine learning models have rapidly become better at generating clinical consultation notes; yet, there is little work on how to properly evaluate the generated consultation notes to understand the impact they may have on both the clinician using them and the patient's clinical safety.",
"To address this we present an extensive human evaluation study of consultation notes where 5 clinicians",
"(i) listen to 57 mock consultations,",
"(ii) write their own notes,",
"(iii) post-edit a number of automatically generated notes, and",
"(iv) extract all the errors, both quantitative and qualitative.",
"We then carry out a correlation study with 18 automatic quality metrics and the human judgements.",
"We find that a simple, character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics like BertScore.",
"All our findings and annotations are open-sourced.",
"Modern Electronic Health Records (EHR) systems require clinicians to keep a thorough record of every patient interaction and management decision.",
"While this creates valuable data that may lead to better health decisions, it also significantly increases the burden on the clinicians, with studies showing this is a major contributor to burnout (Arndt et al., 2017).",
"In most primary healthcare practices, the universal record of a clinician-patient interaction is the SOAP (Subjective, Objective, Assessment, Plan) note, which captures the patient's history, and the clinician's observations, diagnosis, and management plan (Pearce et al., 2016).",
"At the end of a consultation, the clinician is required to write up a SOAP note of the encounter.",
"With the exception of the clinician's internal observations on how the patient looks and feels, most of the SOAP note is verbalised and could be automatically constructed from the transcript of the consultation.",
"A number of recent studies (Enarvi et al., 2020; Joshi et al., 2020; Zhang et al., 2021a) propose using summarisation systems to automatically generate consultation notes from the verbatim transcript of the consultation, a task henceforth referred to as Note Generation.",
"Yet, there is very limited work on how to evaluate a Note Generation system so that it may be safely used in the clinical setting.",
"Where evaluations are present, they are most often carried out with automatic metrics; while quick and cheap, these metrics were devised for general purpose summarisation or machine translation, and it is unclear whether they work just as well on this new task.",
"In the field of automatic summarisation and Natural Language Generation (NLG) in general, human evaluation is the gold standard protocol.",
"Even in cases where the cost of using human evaluation is prohibitive, it is essential to establish the ground truth scores which automatic metrics should aim for.",
"Our contributions are:",
"(i) a large-scale human evaluation performed by 5 clinicians on a set of 285 consultation notes,",
"(ii) a thorough analysis of the clinician annotations, and",
"(iii) a correlation study with 18 automatic metrics, discussing limitations and identifying the most suitable metrics to this task.",
"We release all annotations, human judgements, and metric scores at https://github.com/babylonhealth/primock57.",
"Related Work: Note Generation has been the focus of the academic community, with both extractive methods (Moen et al., 2016b; Alsentzer and Kim, 2018) and abstractive neural methods (Zhang et al., 2018; Liu et al., 2019; MacAvaney et al., 2019; Zhang et al., 2020; Enarvi et al., 2020; Joshi et al., 2020; Krishna et al., 2021; Chintagunta et al., 2021; Yim and Yetisgen-Yildiz, 2021; Moramarco et al., 2021; Zhang et al., 2021a).",
"Whether these studies discuss the generation of radiology reports, patient-nurse summaries, discharge summaries, or SOAP notes, they all deal with long passages of text in the medical domain.",
"This is a critical distinction from other application contexts (e.g. news summarisation): here, commonly used and well-studied evaluation criteria such as 'fluency', 'relevance', and 'adequacy' are superseded by other criteria, such as 'omissions of important negatives', 'misleading information', 'contradictions', etc.",
"In addition, common summarisation metrics such as ROUGE (Lin, 2004) or BertScore (Zhang et al., 2019) measure the standalone quality of outputs and are not typically evaluated against more extrinsic criteria, such as post-editing times.",
"Of the 18 studies on the subject that we could identify, 13 present an automatic evaluation (typically based on ROUGE and sometimes on medical entity linking) and 12 carry out a small-scale intrinsic human evaluation.",
"In particular, Moen et al. (2016a) employ three domain experts to review 40 generated notes with Likert scales along 30 criteria (including 'Long-term diagnosis', 'Reason for admission', 'assessment'), but report that the subjects found the 30 item scale too difficult and detailed to assess.",
"MacAvaney et al. (2019) use one domain expert to review 100 notes and report Likert scale values for 'Readability', 'Accuracy', and 'Completeness'.",
"Moramarco et al. (2021) employ three clinicians and compare the times to post-edit generated notes with those of writing them from scratch, reporting that, while faster, post-editing may be more cognitively intensive than writing.",
"Outside of the medical domain, our work is comparable to Fabbri et al. (2021), who run an automatic metrics correlation study for news article summaries on the CNN/DailyMail dataset (Nallapati et al., 2016).",
"They also release code (https://github.com/Yale-LILY/SummEval) for evaluating text with a suite of common metrics, some of which we include in our own list of metrics to evaluate.",
"Our evaluation study is based on a dataset of 57 pairs of mock consultation transcripts and summary notes (Papadopoulos Korfiatis et al., 2022), available at https://github.com/babylonhealth/primock57.",
"The data was produced by enacting consultations using clinical case cards.",
"The clinicians that conducted the mock consultations also wrote the corresponding SOAP note.",
"The consultations span common topics within primary healthcare and are about 10 minutes long.",
"To mimic a live clinical environment, the audio of the consultations was transcribed with the Google Speech-to-Text engine (https://cloud.google.com/speech-to-text).",
"Figure 1: Diagram of the dataset creation and the four tasks involved in the human evaluation.",
"These transcripts form the input to the Note Generation models.",
"The aim is to generate the Subjective part of a SOAP note.",
"Table 1 shows an example transcript and respective note.",
"Figure 1 describes the creation of the dataset and how the data feeds into the human evaluation tasks described below.",
"In a fashion similar to Chintagunta et al. (2021); Moramarco et al. (2021); Zhang et al. (2021a), we fine-tune 10 neural summarisation models based on BART (Lewis et al., 2020) on a proprietary dataset of 130,000 real consultation notes and transcripts.",
"In accordance with our evaluation dataset, the training set consists of automatic Google Speech-to-text transcripts as inputs and the Subjective part of the corresponding notes as outputs.",
"The base models are large BART architectures pretrained on the CNN/DailyMail dataset (https://huggingface.co/facebook/bart-large-cnn).",
"Since our focus is on evaluation, the aim was to obtain models which would produce different outputs to cover a wider range of errors.",
"The differences between the models included: fine-tuning on different sized datasets; using pre-processing techniques such as filtering the transcripts for relevant sentences; and using post-processing techniques such as filtering the generated notes for irrelevant sentences.",
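"A minimal HuggingFace sketch of this fine-tuning setup; the example texts, sequence lengths, and single training step are placeholders, since the actual training data is proprietary.",

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Start from BART-large pretrained on CNN/DailyMail, as described above.
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

def encode_example(transcript: str, subjective_note: str):
    """Speech-to-text transcript in, Subjective section of the SOAP note out."""
    inputs = tokenizer(transcript, truncation=True, max_length=1024,
                       return_tensors="pt")
    labels = tokenizer(subjective_note, truncation=True, max_length=256,
                       return_tensors="pt").input_ids
    inputs["labels"] = labels
    return inputs

# One illustrative training step (real training would use a Trainer / DataLoader).
batch = encode_example("hello how can I help you today ...",
                       "Sore throat for 3 days. No fever. ...")
loss = model(**batch).loss
loss.backward()
```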
"Human Evaluation Setup: Under the supervision of one of the authors (a clinician expert in AI development, henceforth referred to as the Lead Clinician), we design the following evaluation tasks: 1. Listen to the mock consultation audio and take notes (eval_notes).",
"These eval_notes appear on the evaluator screen throughout to help reduce the cognitive load of remembering what was discussed in the consultation.",
"2. Relying on the eval_notes and the consultation audio, read 5 different notes and post-edit each one of them.",
"Post-editing consists of correcting an imperfect note to produce a factually accurate and relevant note (Sripada et al., 2005).",
"It mimics how a synthetic note could be used in clinical practice while also bootstrapping the error identification (Moramarco et al., 2021).",
"For this purpose, the evaluation platform includes a track-changes interface, which highlights insertions and deletions (Figure 2), and records the time taken to post-edit.",
"3. For each note, classify the errors into two categories, 'incorrect statements' and 'omissions', by copying the spans of text from the post-editing interface and pasting them in the appropriate table (as in Figure 3).",
"Figure 2: Screenshot of the post-editing task where the evaluator is correcting a note with the track-changes interface.",
"We define 'incorrect statements' as sentences in the generated notes which contain one or more factual errors (compared to the consultation audio).",
"Conversely, 'omissions' are medical facts which should be recorded in a consultation note and were omitted by the model.",
"Examples and edge cases (which were given to the evaluators for training) can be found in the Appendix, Figure A.4.",
"Each error is also tagged as 'critical' if the information contained has essential clinical importance; specifically, if the error would lead to medico-legal liability.",
"4. Report any qualitative feedback (e.g. regarding order of statements, repetition) in the 'Other issues' box.",
"Figure 1 (bottom half) shows a diagram of the human evaluation workflow.",
"The subjects of the study were 5 regularly practising clinicians (GPs) with a minimum of 3 years experience.",
"As part of our ethical consideration, all clinicians were paid the UK standard GP working rate and were free to cease participation at any time if they wished.",
"For diversity and inclusion, 2 male clinicians and 3 female clinicians were enlisted from a range of ethnic backgrounds.",
"Following the tasks described above, each clinician evaluated the entire dataset of 57 mock consultations.",
"Each consultation included 5 notes to evaluate, 4 of which were sampled from our 10 models and 1 was written by the consulting doctor ( human_note ).",
"We shuffled these for every consultation and, to avoid biases, did not specify that one of the notes was not synthetic.",
"The evaluation study took circa 30 working hours per evaluator to complete over a period of 8 weeks.",
"Before commencing, each evaluator went through a training and practice process conducted by the Lead Clinician, who explained the evaluation interface and guided them through the annotation of a practice note.",
"A copy of the evaluator instructions can be found in Appendix A. Throughout the study, the authors and the Lead Clinician held weekly sessions with each evaluator where we shadowed the evaluation tasks through screen sharing.",
"This helped us understand the difficulties in performing the tasks while ensuring the evaluators followed the guidelines set out for them.",
"The result of the human evaluation consists of 285 evaluator notes (57 consultations x 5 evaluators), 1,425 post-edited notes (285 x 5 notes per consultation), post-editing times, count and spans of incorrect statements, count and spans of omissions, whether they are critical, and qualitative comments.",
"When compared with more common evaluation approaches such as Likert scales and ranking methods, we believe our set-up provides a more granular and more interpretable set of judgements, albeit at the price of lowering the inter-annotator agreement.",
"To compensate for this, the 5 evaluators annotate the same 57 tasks (Sheng et al., 2008) and the scores are aggregated across evaluators.",

Table 3: Aggregated judgements for human notes and generated notes.

| Criterion | Human | Generated |
|---|---|---|
| Post-edit times | 96.5s | 136.4s |
| Number of Incorrect | 1.3 | 3.9 |
| Number of Omissions | 3.9 | 6.6 |
| Note length | 16.9 | 21.5 |
"As shown in Table 2, we compute inter-annotator agreement on the post-editing times, incorrect statements, and omissions.",
"The absolute post-editing times are converted to a list of rankings for each evaluator, and agreement is computed with Krippendorff's alpha (Krippendorff, 2018) with 'ordinal' level of measurement.",
"This ensures only the ranking of each note is captured in the agreement and not the editing speed of each evaluator.",
"For example, where evaluator 1 takes 60 seconds and 120 seconds to post-edit two given notes and evaluator 2 takes 180 seconds and 240 seconds respectively, their agreement would be perfect, because they both agreed that note 1 is quicker to edit than note 2. Conversely, for incorrect statements and omissions we calculate 'interval' Krippendorff's alpha on the counts of errors identified by the evaluators.",
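"A sketch of both agreement computations using the krippendorff package (our choice of library); the judgement values below are illustrative.",

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = evaluators, columns = notes.
# Post-edit times are first converted to per-evaluator rankings so that only
# the ordering of notes is compared, not each evaluator's editing speed.
times = np.array([[60.0, 120.0, 90.0],
                  [180.0, 240.0, 200.0]])
ranks = times.argsort(axis=1).argsort(axis=1).astype(float)
alpha_times = krippendorff.alpha(reliability_data=ranks,
                                 level_of_measurement="ordinal")

# Error counts are compared directly at the 'interval' level.
incorrect_counts = np.array([[1, 4, 2], [2, 3, 2]], dtype=float)
alpha_counts = krippendorff.alpha(reliability_data=incorrect_counts,
                                  level_of_measurement="interval")
print(alpha_times, alpha_counts)
```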
"As the counts don't ensure that two evaluators have selected the same statements, we also compute word overlap F1 score as suggested by Popovic and Belz (2021).",
"As shown in Table 2, the agreements for times and incorrect statements are not very strong (Krippendorff (2018) indicates that 0.667 is the lowest conceivable limit).",
"We investigate the source of disagreement and attribute it to two main factors:",
"(i) human error due to the difficulty inherent in the task, and",
"(ii) stylistic differences in note writing.",
"Examples of human error can be found in subsection 5.2.",
"As for stylistic differences, these are especially evident in the Omissions category, where some clinicians are thorough in their note taking and others only document the most important facts.",
"See Appendix B for more details on pairwise agreement.",
"To compare the accuracy of the models against human-written notes, we average all the judgements for our criteria (post-edit times, incorrect statements, and omissions), aggregate by the generated notes and the human_notes respectively, and report the results in Table 3.",
"As expected, the human_notes performed better for all criteria; in particular, they contain fewer omissions while being on average 4.6 sentences shorter.",
"However, the evaluators found imperfections in human notes too: it takes over 1.5 minutes on average to read and post-edit a human_note , and it contains over 1 incorrect statement and almost 4 omissions on average.",
"While the omissions can be reconciled as stylistic differences among evaluators, the incorrect statements are potentially more impactful.",
"To investigate, we select two human notes and ask the Lead Clinician to post-edit them, comparing the results with those of the evaluators.",
"In the first case, the Lead Clinician agrees with the evaluators that the human note contains two incorrect statements: 'Also vomiting mainly bilious' (corrected to 'Also vomiting') and 'Wife and children also unwell with vomiting, but no diarrhea.'",
"Upon inspecting the consultation recording, the Lead Clinician found that the word 'bilious' was not stated by the patient.",
"However, the consulting clinician may have used this term due to a personal habitual documentation style (as clinically, vomit with no red flags can conventionally be referred to as bilious).",
"The words 'Wife and children also unwell with vomiting, but no diarrhea' were not stated by the patient.",
"Instead, the patient made a tangential statement summarised here: 'One child had some vomiting but no other symptoms in wife and other child.' Therefore, it is inferred that this clinician likely made a normal human error due to excessive patient detail (non-critical).",
"In the second case, the Lead Clinician found no issues with the human note.",
"Upon inspecting the corrections from the evaluators, he concluded that what they selected as incorrect statements were medical conditions inferred by the consulting clinician yet not specifically stated by the patient.",
"We highlight this to show that it is unclear whether the task has a single ground truth, as even human experts don't completely agree; well thought-out evaluation tasks can mitigate this and produce one or more good ground truth approximations.",
"Detailed examples can be found in Appendix C.",
"To understand the interdependence between our criteria, we compute Pearson's correlation (Freedman et al., 2007) and Spearman's rank correlation (Zar, 2005) coefficients between each pair.",
"Table 4 shows a moderately strong correlation between the time it takes to post-edit a note and the number of incorrect statements it contains.",
"The correlation between post-edit times and omissions is stronger, which could be explained by the fact that it takes longer to type an omitted statement than to delete or edit an incorrect one.",
"Finally, the correlation between post-edit times and incorrect+omissions is strong, which suggests that post-edit time is a function of the number of edits and that one of these criteria could be a proxy for the other.",
"We also compute the correlation between each criterion and the length of the generated note.",
"These numbers can be used as a benchmark for automatic metric correlation; for example, the 0.413 Spearman's correlation between post-edit times and note length indicates that any automatic metric needs to surpass this value in order to be more useful than simply counting the number of sentences in the note.",
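"Both coefficients can be computed with scipy; the values below are illustrative, not the paper's.",

```python
from scipy.stats import pearsonr, spearmanr

def criterion_correlations(x, y):
    """Pearson's and Spearman's correlation between two lists of judgements,
    e.g. per-note post-edit times vs. per-note lengths."""
    pearson_r, _ = pearsonr(x, y)
    spearman_rho, _ = spearmanr(x, y)
    return pearson_r, spearman_rho

# Illustrative values only; see Table 4 for the real coefficients.
post_edit_times = [96.5, 136.4, 110.2, 150.8]
note_lengths = [16.9, 21.5, 18.0, 24.3]
print(criterion_correlations(post_edit_times, note_lengths))
```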
"As introduced in Section 4, the evaluators provide qualitative feedback about the generated notes in the 'Other Issues' field.",
"When analysing these comments, a number of repeated patterns emerged, highlighting common pitfalls in the generated notes.",
"Based on these we defined a small taxonomy (Table 5; issues in the human_notes are excluded), providing examples and occurrences of each issue type.",
"Aside from incorrect statements and omissions, the most significant issues revolve around repetition, disjointed notes, and contradiction.",
"Upon investigating, we believe that all three are related to the tendencies of the models to generate the consultation note following the chronological order of the transcript.",
"While that is an intuitive behaviour, consultations are seldom carried out in the order of SOAP note sections (Subjective, Objective, Assessment, Plan), with the patient providing relevant information whenever they can, sometimes after the clinician has discussed assessment and plan.",
"Borrowing from the field of Automatic Summarisation, most studies on Note Generation rely on ROUGE and fact-extraction based metrics to evaluate the generated notes (Section 2 for more details).",
"While some studies carry out a small human evaluation, there is little effort to investigate whether the scores from ROUGE or the other metrics employed correlate well with the human judgements, especially extrinsic criteria such as post-edit times.",
"However, scores from these metrics are featured on leaderboards for summarisation tasks (e.g. https://nlpprogress.com/english/summarization.html), driving future research.",
"To address this, we carry out a correlation study of automatic metrics for the task of Note Generation.",
"A total of 18 automatic metrics are tested against statistics produced by the human judgements of our criteria: post-edit times, number of incorrect statements, and number of omissions.",
"Following the taxonomies reported by Celikyilmaz et al. (2020) and Sai et al. (2020), the metrics considered can be loosely grouped into: Text overlap metrics.",
"These are based on string matching, whether character based, word based, or n-gram based.",
"Some use stemming, synonyms, or paraphrases.",
"They include: ROUGE (Lin, 2004), CHRF (Popovic, 2015), METEOR (Lavie and Agarwal, 2007), and BLEU (Papineni et al., 2002).",

Table 6: Spearman's correlation coefficients for each metric and each criterion ('PE' = post-edit times; references: human_note, edited_note, eval_note).

| Metric | PE (human) | PE (edited) | PE (eval) | PE (avg) | PE (max) | Inc+Omi (avg) | Inc+Omi (max) | Incorrect (avg) | Incorrect (max) | Omissions (avg) | Omissions (max) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ROUGE-1-F1 * | 0.334 | 0.627 | 0.160 | 0.443 | 0.550 | 0.580 | 0.704 | 0.378 | 0.505 | 0.561 | 0.651 |
| ROUGE-2-F1 * | 0.384 | 0.653 | 0.166 | 0.551 | 0.570 | 0.694 | 0.731 | 0.501 | 0.557 | 0.641 | 0.654 |
| ROUGE-3-F1 * | 0.366 | 0.645 | 0.117 | 0.576 | 0.565 | 0.734 | 0.731 | 0.555 | 0.568 | 0.663 | 0.646 |
| ROUGE-4-F1 * | 0.342 | 0.632 | 0.076 | 0.575 | 0.557 | 0.745 | 0.726 | 0.581 | 0.573 | 0.661 | 0.636 |
| ROUGE-L-Pr * | 0.348 | 0.471 | 0.169 | 0.366 | 0.427 | 0.500 | 0.613 | 0.607 | 0.745 | 0.306 | 0.375 |
| ROUGE-L-Re * | 0.409 | 0.614 | 0.300 | 0.520 | 0.551 | 0.640 | 0.680 | 0.374 | 0.416 | 0.660 | 0.688 |
| ROUGE-L-F1 * | 0.384 | 0.646 | 0.285 | 0.538 | 0.564 | 0.661 | 0.719 | 0.479 | 0.534 | 0.610 | 0.655 |
| CHRF * | 0.341 | 0.460 | -0.075 | 0.463 | 0.438 | 0.581 | 0.560 | 0.504 | 0.484 | 0.483 | 0.462 |
| METEOR * | 0.415 | 0.667 | 0.203 | 0.529 | 0.581 | 0.674 | 0.713 | 0.429 | 0.463 | 0.668 | 0.699 |
| BLEU * | 0.382 | 0.642 | 0.098 | 0.557 | 0.565 | 0.698 | 0.702 | 0.447 | 0.453 | 0.685 | 0.686 |
| Levenshtein dist. | 0.547 | 0.780 | 0.453 | 0.600 | 0.654 | 0.650 | 0.760 | 0.566 | 0.555 | 0.531 | 0.697 |
| WER | 0.239 | 0.629 | 0.059 | 0.326 | 0.550 | 0.412 | 0.704 | 0.499 | 0.535 | 0.252 | 0.631 |
| MER | 0.392 | 0.635 | 0.156 | 0.565 | 0.557 | 0.703 | 0.706 | 0.500 | 0.513 | 0.659 | 0.651 |
| WIL | 0.394 | 0.649 | 0.117 | 0.590 | 0.566 | 0.747 | 0.723 | 0.578 | 0.566 | 0.668 | 0.638 |
| ROUGE-WE * | 0.402 | 0.624 | 0.165 | 0.496 | 0.549 | 0.621 | 0.712 | 0.415 | 0.524 | 0.595 | 0.650 |
| SkipThoughts * | 0.298 | 0.403 | -0.067 | 0.229 | 0.375 | 0.366 | 0.504 | 0.338 | 0.407 | 0.288 | 0.439 |
| Embedding Avg * | 0.266 | 0.375 | -0.209 | 0.064 | 0.412 | 0.223 | 0.572 | 0.147 | 0.392 | 0.211 | 0.543 |
| VectorExtrema * | 0.409 | 0.553 | 0.127 | 0.424 | 0.500 | 0.550 | 0.648 | 0.367 | 0.468 | 0.531 | 0.600 |
| GreedyMatching * | 0.308 | 0.577 | -0.041 | 0.295 | 0.520 | 0.436 | 0.670 | 0.281 | 0.479 | 0.428 | 0.624 |
| USE * | 0.339 | 0.522 | 0.201 | 0.366 | 0.476 | 0.474 | 0.637 | 0.327 | 0.452 | 0.454 | 0.598 |
| WMD | 0.354 | 0.594 | 0.154 | 0.421 | 0.529 | 0.561 | 0.670 | 0.319 | 0.414 | 0.577 | 0.670 |
| BertScore * | 0.497 | 0.688 | 0.340 | 0.571 | 0.590 | 0.710 | 0.744 | 0.530 | 0.552 | 0.645 | 0.676 |
| MoverScore * | 0.360 | 0.640 | 0.246 | 0.570 | 0.559 | 0.687 | 0.688 | 0.448 | 0.467 | 0.669 | 0.657 |
| Stanza+Snomed * | 0.334 | 0.508 | 0.118 | 0.354 | 0.460 | 0.528 | 0.643 | 0.447 | 0.533 | 0.449 | 0.553 |
"Edit distance metrics.",
"These count the number of character or word level transformations required to convert the system output into the reference text.",
"They include: Levenshtein edit distance (Levenshtein et al., 1966), WER (Su et al., 1992), MER and WIL (Morris et al., 2004).",
"Embedding metrics, including word-level, byte-level, and sentence-level embeddings.",
"These metrics encode units of text with pretrained models and compute cosine similarity between them.",
"They include: ROUGE-WE (Morris et al., 2004), SkipThoughts, EmbeddingAverage, VectorExtrema (Forgues et al., 2014), GreedyMatching (Sharma et al., 2017), USE (Cer et al., 2018), WMD (Kusner et al., 2015), BertScore (Zhang et al., 2019), and MoverScore (Zhao et al., 2019); for USE, we compute the cosine similarity between the reference and hypothesis embeddings, computed with the Universal Sentence Encoder.",
"Fact-extraction.",
"The Stanza+Snomed metric extracts medical concept spans with Stanza (Zhang et al., 2021b), then uses similarity measures to map them to entities in the SNOMED CT clinical ontology (Spackman et al., 1997).",
"The metric computes F1 score between reference and hypothesis over the set of extracted entities.",
"For more details on each metric please refer to their respective papers.",
"All these metrics attempt to measure the accuracy of the generated text by comparing it against a reference text.",
"Our human evaluation study produces three distinct human-curated notes which can be used as reference: the human_note is the original note, written by the consulting clinician (and also one of the hypotheses), the eval_note is the note written by the evaluators after listening to the consultation audio, and the edited_note is the generated note after being post-edited by the evaluators.",
"Table 6 reports the correlation coefficients.",
"When correlating against post-edit times, we consider each reference text ( human_note , edited_note , eval_note ) separately, then take the average and the maximum of the metric scores for each reference.",
"For count of incorrect statements, omissions, and incorrect+omissions we only report the average and the maximum scores, taking all three references into account as commonly done by the metrics that support multiple references (e.g. BLEU, ROUGE, METEOR, BertScore).",
"We compute Pearson's and Spearman's coefficients and, upon finding similar patterns, only report Spearman's coefficients in Table 6.",
"The Pearson's coefficients can be found in Table A.8 in the Appendix.",
"As shown in Table 6, all metrics display a strong bias towards the choice of reference.",
"In particular, the correlation scores with the edited_note as reference are much higher than those of either human_note or eval_note.",
"As the edited_note is a transformation of the generated note (refer to Figure 2), these high correlations show how reliant all the metrics are on the surface form of the text.",
"The significant difference between taking human_note and eval_note as reference can be traced to two main factors:",
"(i) the human_note is unique per consultation so the human judgements are averaged across evaluators (reducing noise and collapsing disagreement), and",
"(ii) the eval_note was not written to replace a SOAP note but is rather a list of the most salient points in the consultation, and sometimes contains more information than would typically be detailed in a SOAP note.",
"The top three metrics in most scenarios are Levenshtein distance, BertScore, and METEOR.",
"While METEOR and BertScore are established metrics in NLG evaluation, Levenshtein distance is not typically used as a metric in long-form text evaluation.",
"From a semantic point of view, edit distance has the least amount of knowledge and should be very brittle when comparing text that is meaningfully similar but lexically very different.",
"Yet Levenshtein distance has the highest correlation even when the reference is the eval_note , which is syntactically very different from the generated note; whereas even contextual metrics like BertScore perform more poorly.",
"A possible explanation for this behaviour may be that our post-editing times and counts of incorrect statements/omissions, unlike Likert scale scores, measure the amount of work required to convert a synthetic note into a factually-correct and relevant note, just as Levenshtein distance measures the character-level distance between the synthetic note and the reference.",
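"For reference, character-level Levenshtein distance needs no model at all; a minimal sketch follows, with the normalisation into a similarity score being our convention.",

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance between a generated note and a reference,
    computed with the standard dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_similarity(hyp: str, ref: str) -> float:
    """Normalise so that higher means closer to the reference (our convention)."""
    denom = max(len(hyp), len(ref)) or 1
    return 1.0 - levenshtein(hyp, ref) / denom
```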
"We notice that all the metrics correlate better with counts of incorrect+omissions than with post-edit times, despite the two criteria being strongly correlated with each other (0.829 Spearman's correlation, see Table 4).",
"We believe this is due to post-editing times containing more noise and capturing more of the stylistic differences between evaluators than the number of errors does.",
"We conducted a human evaluation study for the task of consultation Note Generation, computed agreement between evaluators, and quantified the extent to which human error impacts the judgements.",
"We then carried out a correlation study with 18 automatic metrics, discussing their limitations and identifying the most successful ones.",
"We found that the choice of human reference has a significant effect on all automatic metrics and that simple character-based metrics like Levenshtein distance can be more effective than complex model-based metrics for the task of Note Generation.",
"Based on our findings, character-based Levenshtein distance, BertScore, and METEOR are the most suitable metrics to evaluate this task.",
"We release all the data and annotations and welcome researchers to assess further metrics.",
"The authors would like to thank Rachel Young and Tom Knoll for supporting the team and hiring the evaluators, Vitalii Zhelezniak for his advice on revising the paper, and Kristian Boda for helping to set up the Stanza+Snomed fact-extraction system."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"result",
"method",
"abstain",
"other"
] |
[
"Reading long documents to answer open-domain questions remains challenging in natural language understanding.",
"In this paper, we introduce a new model, called RikiNet, which reads Wikipedia pages for natural question answering.",
"RikiNet contains a dynamic paragraph dual-attention reader and a multi-level cascaded answer predictor.",
"The reader dynamically represents the document and question by utilizing a set of complementary attention mechanisms.",
"The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner.",
"On the Natural Questions (NQ) dataset, a single RikiNet achieves 74.3 F1 and 57.9 F1 on long-answer and short-answer tasks.",
"To the best of our knowledge, it is the first single model that outperforms the single human performance.",
"Furthermore, an ensemble RikiNet obtains 76.1 F1 and 61.3 F1 on long-answer and short-answer tasks, achieving the best performance on the official NQ leaderboard.",
"Machine reading comprehension (MRC) refers to the task of finding answers to given questions by reading and understanding some documents.",
"It represents a challenging benchmark task in natural language understanding (NLU).",
"With the progress of large-scale pre-trained language models (Devlin et al., 2018), state-of-the-art MRC models (Ju et al., 2019; Yang et al., 2019; Lan et al., 2019; Zhang et al., 2019; Liu et al., 2019) have already surpassed human-level performance on certain commonly used MRC benchmark datasets, such as SQuAD 1.1 (Rajpurkar et al., 2016), SQuAD 2.0 (Rajpurkar et al., 2018), and CoQA (Reddy et al., 2019).",
"Recently, a new benchmark MRC dataset called Natural Questions (NQ) (Kwiatkowski et al., 2019) has presented a substantially greater challenge for the existing MRC models.",
"Specifically, there are two main challenges in NQ compared to the previous MRC datasets like SQuAD 2.0.",
"Firstly , instead of providing one relatively short paragraph for each question-answer (QA) pair, NQ gives an entire Wikipedia page which is significantly longer compared to other datasets.",
"Secondly , NQ task not only requires the model to find an answer span (called short answer) to the question like previous MRC tasks but also asks the model to find a paragraph that contains the information required to answer the question (called long answer).",
"In this paper, we focus on the NQ task and propose a new MRC model called RikiNet, tailored to its associated challenges, which Reads the Wikipedia pages for natural question answering.",
"For the first challenge of the NQ task mentioned above, RikiNet employs the proposed Dynamic Paragraph Dual-Attention (DPDA) reader which contains multiple DPDA blocks.",
"In each DPDA block, we iteratively perform dual-attention to represent documents and questions, and employ paragraph self-attention with dynamic attention mask to fuse key tokens in each paragraph.",
"The resulting context-aware question representation, question-aware token-level, and paragraph-level representations are fed into the predictor to obtain the answer.",
"The motivations of designing DPDA reader are:",
"(a) Although the entire Wikipedia page contains a large amount of text, one key observation is that most answers are only related to a few words in one paragraph;",
"(b) The final paragraph representation can be used naturally for predicting long answers.",
"NQ provides some visual examples of the data at https://ai.google.com/research/NaturalQuestions/visualization.",
"We describe the details of the DPDA reader in Section 3.1.",
"For the second challenge, unlike prior works on NQ dataset (Alberti et al., 2019b; Pan et al., 2019) that only predict the short answer and directly select its paragraph as long answer, RikiNet employs a multi-level cascaded answer predictor which jointly predict the short answer span, the long answer paragraph, and the answer type in a cascaded manner.",
"Another key intuition motivating our design is that even if the relevant documents are not given, humans can easily judge that some questions have no short answers (Borschinger et al., 2019).",
"Take this question as a motivating example: 'What is the origin of the Nobel prize?'",
"The answer should be based on a long story, which cannot be easily expressed in a short span of entities.",
"Therefore we also feed the question representation into the predictor as an auxiliary prior to answer type prediction.",
"The details will be given in Section 3.2.",
"On the NQ test set, our single model obtains 74.3 F1 scores on the long-answer task (LA) and 57.9 F1 scores on the short-answer task (SA) compared to the published best single model (Alberti et al., 2019a) results of 66.8 F1 on LA and 53.9 F1 on SA.",
"To the best of our knowledge, RikiNet is the first single model that outperforms the single human performance (Kwiatkowski et al., 2019) on both LA and SA.",
"Besides, our ensemble model obtains 76.1 F1 on LA and 61.3 F1 on SA, which achieves the best performance of both LA and SA on the official NQ leaderboard.",
"Before we describe our model in detail, we first introduce the notations and problem formalization.",
"Our paper considers the following NQ (Kwiatkowski et al., 2019) task: Given a natural question q , a related Wikipedia page p (in the top 5 search results returned by the Google search engine), the model outputs a paragraph within the Wikipedia page p as the long answer which contains enough information to infer the answer to the question, and an entity span within the long answer that answers the question as the short answer .",
"Also, for about 1% of the Wikipedia pages, the short answer is yes or no instead of a short span.",
"Both long answers and short answers can be NULL ( i.e. , no such answer could be found).",
"Given a natural question q and its paired Wikipedia page p , we tokenize them with the 30,522 wordpiece vocabulary as used in (Devlin et al., 2018).",
"Following (Alberti et al., 2019b; Pan et al., 2019), we generate multiple document spans by splitting the Wikipedia page with a sliding window.",
"Then, we obtain multiple 6-tuple training instances $(q, d, c, s, e, t)$ for each NQ data pair $(q, p)$, where $q$ and $d$ are wordpiece IDs of the question with length $n$ and the document span with length $m$, $c \in \mathcal{S}$ indicates the paragraph index of the long answer, where $\mathcal{S}$ is the set that includes all paragraph indexes (i.e., all long answer candidates) within $d$, $s, e \in \{0, 1, \ldots, m-1\}$ are inclusive indices pointing to the start and end of the short answer span, and $t \in \{0, 1, 2, 3, 4\}$ represents the five answer types, corresponding to the labels NULL (no answer), SHORT (has short answer), LONG (only has long answer), YES, and NO.",
"For each tuple ( q, d, c, s, e, t ) of the data pair ( q, p ) , RikiNet takes d and q as inputs, and jointly predicts c, s, e, t .",
"Finally we merge the prediction results of every tuple to obtain the final predicted long answer, short answer, and their confidence scores of the data pair ( q, p ) for evaluation.",
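"A minimal sketch of this sliding-window preprocessing; the window and stride sizes here are illustrative, not the paper's settings.",

```python
def generate_document_spans(doc_token_ids, window=512, stride=128):
    """Split a tokenized Wikipedia page into overlapping document spans with a
    sliding window; each span keeps its start offset for merging predictions."""
    spans, start = [], 0
    while True:
        spans.append((start, doc_token_ids[start:start + window]))
        if start + window >= len(doc_token_ids):
            break
        start += stride
    return spans
```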
"We propose RikiNet, which Reads the Wikipedia pages for natural question answering.",
"As shown in Fig. 1, RikiNet consists of two modules:",
"(a) the dynamic paragraph dual-attention reader as described in Section 3.1, and",
"(b) the multi-level cascaded answer predictor as described in Section 3.2.",
"Dynamic Paragraph Dual-Attention (DPDA) reader aims to represent the document span d and the question q .",
"It outputs the context-aware question representation, question-aware token-level document representation, and paragraph-level document representation, which will be all fed into the predictor to obtain the long and short answers.",
"We firstly employ a pre-trained language model such as BERT (Devlin et al., 2018) to obtain the initial question representation $Q_0 \in \mathbb{R}^{n \times h}$ and the initial document span representation $D_0 \in \mathbb{R}^{m \times h}$, where $h$ is the hidden size.",
"Similar to (Devlin et al., 2018), we concatenate a [CLS] token, the tokenized question q with length n , a [SEP] token, the tokenized document span d with length m , and a final [SEP] token.",
"Then we feed the resulting sequence into the pre-trained language model.",
"As shown on the left in Fig. 1, DPDA reader contains multiple Dynamic Paragraph Dual-Attention (DPDA) blocks.",
"The first block takes $Q_0$ and $D_0$ as the inputs.",
"The outputs $Q_{(t)}$ and $D_{(t)}$ of the $t$-th block are then fed into the next block.",
"Each block contains three types of layers: the dual-attention layer, the paragraph dynamic self-attention layer, and the question self-attention layer.",
"The last DPDA block outputs the final question and document representations.",
"We describe them in detail now.",
"Dual-Attention Layer: To strengthen the information fusion from the question to the paragraphs as well as from the paragraphs to the question, we adapt a dual-attention mechanism, which has been shown effective in other MRC models (Xiong et al., 2018; Seo et al., 2017; Xiong et al., 2017).",
"We further tweak it by increasing the depth of attention followed by a residual connection (He et al., 2016) and layer normalization (Ba et al., 2016).",
"In particular, the $t$-th block first calculates a similarity matrix $L_{(t)} \in \mathbb{R}^{m \times n}$, which is then normalized row-wise and column-wise to produce two attention weights: $A^Q_{(t)} \in \mathbb{R}^{m \times n}$, across the document for each token in the question; and $A^D_{(t)} \in \mathbb{R}^{n \times m}$, across the question for each token in the document: $L_{(t)} = D_{(t-1)} Q_{(t-1)}^{\top} \in \mathbb{R}^{m \times n}$, $A^Q_{(t)} = \mathrm{Softmax}(L_{(t)}) \in \mathbb{R}^{m \times n}$, $A^D_{(t)} = \mathrm{Softmax}(L_{(t)}^{\top}) \in \mathbb{R}^{n \times m}$.",
"Similar to (Xiong et al., 2017; Seo et al., 2017), we obtain the question-aware representation of the document by $\bar{Q}^C_{(t)} = (D_{(t-1)}^{\top} A^Q_{(t)})^{\top} \in \mathbb{R}^{n \times h}$ and $\bar{D}^C_{(t)} = (A^D_{(t)})^{\top} [Q_{(t-1)}; \bar{Q}^C_{(t)}] \in \mathbb{R}^{m \times 2h}$, where $[\,;\,]$ denotes concatenation.",
"Symmetrically, $\hat{D}^C_{(t)} = (Q_{(t-1)}^{\top} A^D_{(t)})^{\top} \in \mathbb{R}^{m \times h}$ and $\hat{Q}^C_{(t)} = (A^Q_{(t)})^{\top} [D_{(t-1)}; \hat{D}^C_{(t)}] \in \mathbb{R}^{n \times 2h}$.",
"We finally apply the residual connection and layer normalization, with linear transformations, to both the document and question representations: $D^C_{(t)} = \mathrm{LayerNorm}(D_{(t-1)} + \bar{D}^C_{(t)} W^D_{(t)})$ and $Q^C_{(t)} = \mathrm{LayerNorm}(Q_{(t-1)} + \hat{Q}^C_{(t)} W^Q_{(t)})$,",
"where $W^D_{(t)} \in \mathbb{R}^{2h \times h}$ and $W^Q_{(t)} \in \mathbb{R}^{2h \times h}$ are trainable parameters in the dual-attention layer of the $t$-th block.",
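"The dual-attention layer above can be sketched in PyTorch as follows (single example, no batching); this is our simplified reading of the equations, not the released RikiNet code.",

```python
import torch
import torch.nn as nn

class DualAttentionLayer(nn.Module):
    """Sketch of the dual-attention layer: document (m, h) and question (n, h)."""

    def __init__(self, h: int):
        super().__init__()
        self.w_d = nn.Linear(2 * h, h, bias=False)   # W^D_(t)
        self.w_q = nn.Linear(2 * h, h, bias=False)   # W^Q_(t)
        self.ln_d = nn.LayerNorm(h)
        self.ln_q = nn.LayerNorm(h)

    def forward(self, d, q):
        sim = d @ q.t()                    # L_(t): (m, n)
        a_q = sim.softmax(dim=-1)          # A^Q_(t): attend over the question
        a_d = sim.t().softmax(dim=-1)      # A^D_(t): attend over the document

        q_bar = a_q.t() @ d                                    # (n, h)
        d_bar = a_d.t() @ torch.cat([q, q_bar], dim=-1)        # (m, 2h)
        d_hat = a_d.t() @ q                                    # (m, h)
        q_hat = a_q.t() @ torch.cat([d, d_hat], dim=-1)        # (n, 2h)

        # Residual connection + layer normalization with linear transformations.
        d_out = self.ln_d(d + self.w_d(d_bar))                 # D^C_(t)
        q_out = self.ln_q(q + self.w_q(q_hat))                 # Q^C_(t)
        return d_out, q_out
```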
"The document representation $D^C_{(t)}$ will be fed into the paragraph dynamic self-attention layer to obtain the paragraph representation.",
"The question representation $Q^C_{(t)}$ will be fed into the question self-attention layer to get the question embedding.",
"Question Self-Attention Layer: This layer applies a transformer block to the question representation $Q^C_{(t)}$, where the transformer block consists of two sub-layers: a multi-head self-attention layer and a position-wise fully connected feed-forward layer.",
"Each sub-layer is placed inside a residual connection with layer normalization.",
"After the last DPDA block, we obtain the final question embedding $\mathbf{q} \in \mathbb{R}^{h}$ by applying mean pooling, $\mathbf{q} = \mathrm{MeanPooling}(Q^C_{(T)}) \in \mathbb{R}^{h}$, where $T$ denotes the number of DPDA blocks.",
"This question embedding q will be further fed into the predictor for answer type prediction.",
"Paragraph Dynamic Self-Attention Layer This layer is responsible for gathering information on the key tokens in each paragraph.",
"The token-level representation D ( t ) is first given by: D ( t ) = Transformer (cid:16) DC ( t ) (cid:17) R m h .",
"The difference from the original multi-head self-attention in (Vaswani et al., 2017) is that we incorporate two extra attention masks, which will be introduced later in Eq.",
"(3) and (4).",
"The last DPDA block applies a mean pooling to the tokens within the same paragraph to obtain the paragraph representation L R l h as L [ i, :] = MeanPooling L j = i (cid:0)(cid:8) D ( T ) [ j, :] (cid:9)(cid:1) R h , (2) where l denotes the number of paragraph within the document span d ( i.e. , the number of long answer candidates within the document span d ), L [ i, :] is the representation of the i -th paragraph, D ( T ) [ j, :] is the representation of the j -th token at last DPDA block, and L j indicates the index number of the paragraph where the j -th token is located.",
"Tokens in the original multi-head attention layer of the transformer self-attention block attend to all tokens.",
"We introduce two attention masks to the self-attention sub-layer in Eq.",
"(1) based on two key motivations: 1) Each paragraph representation should focus on the question-aware token information inside the paragraph; 2) Most of the answers are only related to a few words in a paragraph.",
"For the first motivation, we introduce the paragraph attention mask ML R m m which is defined as: ML [ i, j ] = (cid:26) 0 , if L i = L j , , otherwise .",
"It forces each token to only attend to the tokens within the same paragraph.",
"Therefore, each paragraph representation focuses on its internal token information after the mean pooling of Eq.",
"(2).",
"Based on the second motivation, we dynamically generate another attention mask to select key tokens before self-attention.",
"We use a neural network F ( t ) called scorer with the Sigmoid activation function to calculate the importance score for each token: ( t ) = F ( t ) (cid:16) DC ( t ) (cid:17) R m 1 , Then we obtain the dynamic attention mask M ( t ) R m m by selecting topK tokens 3 M ( t ) [ i, j ] = (cid:26) 0 , if i S ( t ) and j S ( t ) , otherwise , (4) where S ( t ) = argmax-K k [0 ,m 1] (cid:0)(cid:8) ( t ) [ k ] (cid:9)(cid:1) .",
"( t ) [ k ] denotes the score of the k -th token at t th block, K is a hyperparameter, and S ( t ) is the set that includes the index of the selected topK tokens.",
"This attention mask lets the paragraph representation concentrate on the selected key tokens.",
"The final scaled dot-product attention weight A ( t ) R m m of the multi-head self-attention sublayer (Vaswani et al., 2017) in Eq.",
"(1) with two proposed attention masks can be written as: A ( t ) = Softmax M ( t ) + ML + (cid:16) DC ( t ) DC ( t ) (cid:62) (cid:17) h .",
"3 Following Zhuang and Wang (2019), our implementation pads the unselected token representations with zero embeddings and adds the scorer representation with the linear transformation to D ( t ) to avoid gradient vanishing for scorer training.",
"Due to the nature of the NQ tasks, a short answer is always contained within a long answer, and thus it makes sense to use the prediction of long answers to facilitate the process of obtaining short answers.",
"As shown on the right in Fig. 1, we design a cascaded structure to exploit this dependency.",
"This predictor takes the token representation D ( T ) , the paragraph representation L , and the question embedding q as inputs to predict four outputs in a cascaded manner: (1) long answer (2) the start position of the short answer span (3) the end position of the short answer span (4) the answer type.",
"That is, the previous results are used for the next tasks as indicated by the notation .",
"Long Answer Prediction We employ a dense layer FL with Tanh activation function as long answer prediction layer, which takes the paragraph representation L R l h as input to obtain the long-answer prediction representation HL R l h .",
"Then the long-answer logits o L are computed with a linear layer HL = FL ( L ) R l h , o L = HLWL R l , where WL R h 1 is a trainable parameter.",
"Short Answer Prediction Firstly, we use the long-answer prediction representation HL and the token representation D ( T ) as the inputs to predict the start position of the short answer.",
"Then the prediction representation of the start position of the short answer will be re-used to predict the end position.",
"Since the row-dimension of D ( T ) R m h is different from that of HL R l h , we cannot directly concatenate the HL to D ( T ) .",
"We tile the HL R l h with HL R m h along the row-dimension: HL [ i, :] = HL [ L i , :] R h .",
"Note that L i indicates the index number of the paragraph where the i -th token is located.",
"Thus, the model can consider the prediction information of the long answer when predicting the short answer.",
"Similarly, the start and end position logits of the short answer are predicted by, HS = FS (cid:0)(cid:2) HL ; D ( T ) (cid:3)(cid:1) R m h , o S = HSWS R m , HE = FE (cid:0)(cid:2) HS ; D ( T ) (cid:3)(cid:1) R m h , o E = HEWE R m , where o S and o E are the output logit vectors of the start positions and the end positions of the short answer, FS and FE are two dense layers with Tanh activation function, and WS R h 1 , WE R h 1 are trainable parameters.",
"Answer Type Prediction Finally, the predictor outputs the answer type.",
"There are five answer types as discussed in 2. With the observation that humans can easily judge that some questions have no short answers even without seeing the document, we treat the question embedding q R h as an auxiliary input for the answer type prediction.",
"Besides, the token representation D ( T ) and the short-answer prediction representation HE are also used for that prediction: d = MeanPooling (cid:0) D ( T ) (cid:1) R h , e = MaxPooling (cid:0) HE (cid:1) R h , h T = FT ([ d ; q ; e ]) R h , o T = Softmax (cid:0) h TWT (cid:1) R 5 , where o T is the logits of the five answer types, FT is a dense layer with Tanh activation function, and WT R h 5 is a trainable parameter.",
"Training Loss and Inference For training, we compute cross-entropy loss over the above mentioned output logits, and jointly minimize these four cross-entropy losses as: L = LL + LS + LE + LT .",
"During inference, we calculate the final long-answer score L for all the paragraphs within the Wikipedia page based on the long-answer logits o L and the answer type logits o T .",
"The long-answer score of paragraph c can be written as L ( c ) = o L [ c ] + (cid:32) 4 (cid:88) t =1 o T [ t ] o T [0] (cid:33) (cid:124) (cid:123)(cid:122) (cid:125) answer type score , where o T [0] denotes the logits where the answer type is NULL(no answer), (cid:80) 4 t =1 o T [ t ] denotes the sum of the logits where the answer type is not NULL.",
"The answer type score can be seen as a bias of each document span in the Wikipedia page.",
"Then we select the paragraph of the highest long-answer score L over the entire Wikipedia page as the long answer.",
"Similarly, the short-answer score of the corresponding span ( s, e ) is calculate by S ( s, e ) = (cid:0) o S [ s ] + o E [ e ] (cid:1) (cid:124) (cid:123)(cid:122) (cid:125) answer span score + (cid:0) o T [1] o T [0] (cid:1) (cid:124) (cid:123)(cid:122) (cid:125) answer type score , where o T [1] denotes the score where the answer type is SHORT(has short answer).",
"We select the short answer span which has the highest short-answer score S within the long answer as the final short answer.",
"We use the official NQ evaluation script to set two separate thresholds for predicting whether the two types of answers are answerable.",
"We focus on the Natural Questions (NQ) (Kwiatkowski et al., 2019) dataset in this work.",
"The public release of the NQ dataset consists of 307,373 training examples and 7,830 examples for development data (dev set).",
"NQ provides a blind test set contains 7,842 examples, which can only be accessed through a public leaderboard submission.",
"As discussed in 2, we generate multiple document spans by splitting the Wikipedia page with a sliding window.",
"Following (Pan et al., 2019; Alberti et al., 2019b), the size and stride of the sliding window are set to 512 and 192 tokens respectively.",
"The average number of document spans of one Wikipedia page is about 22.",
"Since most of the document span does not contain the answer, the number of negative samples ( i.e., no answer) and positive samples ( i.e., has answers) is extremely imbalanced.",
"We follow (Pan et al., 2019; Alberti et al., 2019b) to sub-sample negative instances for training, where the rate of sub-sampling negative instance is the same as in (Pan et al., 2019).",
"As a result, there are 469,062 training instances in total.",
"We use Adam optimizer (Kingma and Ba, 2015) with a batch size of 36 for model training.",
"The initial learning rate, the learning rate warmup proportion, the training epoch, the hidden size h , the number of blocks T , and the hyperparameter K are set to 2 10 5 , 0 .",
"1 , 2 , 1024 , 2 , and 256 respectively.",
"Our model takes approximately 24 hours to train with 4 Nvidia Tesla P40.",
"Evaluation completed in about 6 hours on the NQ dev and test set with a single Nvidia Tesla P100.",
"We use the Google released BERT-large model fine-tuned with synthetic self-training (Alberti et al., 2019a) to encode the document and question as described in 3.1.1.",
"We also compare the performance of RikiNet which uses the pre-trained RoBERTa large model (Liu et al., 2019).",
"It should be noted that our RikiNet is orthogonal to the choice of a particular pre-trained language model.",
"We present a comparison between previously published works on the NQ task and our RikiNet.",
"We report the results of the precision (P), the recall (R), and the F1 score for the long-answer (LA) and short-answer (SA) tasks on both test set and dev set in Tab.",
"1. The first two lines of Tab.",
"1 show the results of two multi-passage MRC baseline models presented in the original NQ paper (Kwiatkowski et al., 2019).",
"The third to sixth lines show the results of the previous state-of-the-art models.",
"These models all employ the BERT large model and perform better than that two baselines.",
"Our RikiNet-BERT large also employs the BERT large model, and its single model has achieved a significant improvement over the previously published best model on the test set (LA from 66.8 F1 to 74.3 F1, and SA from 53.9 F1 to 57.9 F1).",
"To the best of our knowledge, this is the first 4 single model that surpasses the single human performance (Kwiatkowski et al., 2019) on both LA and SA tasks.",
"We also provide a BERT joint (Alberti et al., 2019b) + RoBERTa large (Liu et al., 2019) baseline on NQ, which only replaces the BERT large in BERT joint method with RoBERTa large .",
"To be expected, the BERT joint + RoBERTa large performs better than original BERT joint .",
"Furthermore, our single model of RikiNet-RoBERTa large which employs RoBERTa large model also achieves better performance on both LA and SA, significantly outperforming BERT joint + RoBERTa large .",
"These results demonstrate the effectiveness of our RikiNet.",
"Since most submissions on the NQ leaderboard are ensemble models, we also report the results of our ensemble model, which consists of three RikiNet-RoBERTa large models with different hyper-parameters.",
"At the time of submission (29 Nov. 2019), the NQ leaderboard shows that our ensemble model achieves the best performance on both LA (F1 76.1) and SA (F1 61.3).",
"4 The single RikiNet-BERT large model was submitted to the NQ public leaderboard on 7 Nov. 2019.",
"RikiNet consists of two key parts: DPDA reader and multi-level cascaded answer predictor.",
"To get a better insight into RikiNet, we conduct an in-depth ablation study on probing these two modules.",
"We report the LA and SA F1 scores on the dev set.",
"Ablations of DPDA Reader We keep the predictor and remove the component of the DPDA reader.",
"The results are shown in Tab.",
"2. In",
"(a), we remove the entire DPDA reader as introduced in 3.1 except BERT large .",
"In",
"(b),",
"(c), and",
"(d), we remove the dual-attention layer, question self-attention layer, and paragraph dynamic self-attention layer as described in 3.1.1 respectively.",
"In",
"(e) and",
"(f), we remove the paragraph attention mask of Eq.",
"(3) and the dynamic attention mask of Eq.",
"(4) respectively.",
"We can see that after removing the DPDA reader, the performance drops sharply.",
"In addition, the paragraph dynamic self-attention layer has the greatest impact on performance.",
"Moreover, both the paragraph attention mask and dynamic attention mask contribute to the performance improvement.",
"We also change the hyper-parameter K and the number of blocks T .",
"Results show that the setting of K = 384 performs better than K = 512 ( i.e. , no dynamic attention mask), and K = 256 performs best.",
"For the number of DPDA blocks T , the model achieves the best performance when T = 2 .",
"Ablations of Predictor On the predictor side, we further remove or replace its component and report the results in Tab.",
"3. In (1) we remove the whole DPDA reader and predictor.",
"In (2), we reSetting LA F1 SA F1 RikiNet-BERT large (Full) 73.9 57.7",
"move the way of multi-level prediction ( i.e. , training the model to predict long and short answer jointly) described in 3.2, and follow the previous work (Alberti et al., 2019b) to directly predict the short answer and then select its paragraph as the long answer.",
"We can see that our multi-level prediction is critical to the long answer prediction.",
"In (3) we only remove the cascaded structure but keep the multi-level prediction, which means that the prediction representations are no longer used as input for other predictions, the performance of both long and short answers drops about 1.0 F1 score.",
"In (4) we change the ordering of cascaded process.",
"That is instead of considering long an-Setting LA F1 SA F1 RikiNet-BERT large (Full) 73.9 57.7 (1) DPDA reader & Predictor 65.9 55.1 (2) Multi-level prediction 70.9 57.1 (3) Cascaded structure 73.0 56.7 (4) + S2L cascaded structure 73.6 57.5 (5) Question embedding 73.4 57.4 (6) Tanh dense prediction layer 73.2 57.3 (7) + Bi-LSTM prediction layer 73.3 57.4 (8) + Transformer prediction layer 73.5 57.5 (9) + GELU dense prediction layer 73.7 57.6 Table 3: Ablations of multi-level cascaded predictor on dev set of NQ dataset.",
"swer first and then short answer as described in 3.2, we consider the cascaded structure of short answer first and then long answer.",
"However, we get slightly worse results in this way.",
"In (5), we remove the question embedding which is used for answer type prediction.",
"It can be observed that the question embedding contributes to performance improvement.",
"In the variants of (6)-(9), we remove the dense prediction layers with Tanh activation function and replace it with Bi-directional Long-Short Term Memory (Bi-LSTM) (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) layers, transformer self-attention blocks, and dense prediction layers with Gaussian Error Linear Unit GELU (Hendrycks and Gimpel, 2016) activation function but neither get better performance.",
"Natural Questions (NQ) dataset (Kwiatkowski et al., 2019) has been recently proposed, where each question is paired with an entire Wikipedia page which is a long document containing multiple passages.",
"Although BERT (Devlin et al., 2018) based MRC models have surpassed human performance on several MRC benchmark datasets (Lan et al., 2019; Devlin et al., 2018; Liu et al., 2019; Rajpurkar et al., 2018), a similar BERT method (Al-berti et al., 2019b) still has a big gap with human performance on NQ dataset.",
"There are several recently proposed deep learning approaches for multi-passage reading comprehension.",
"Chen et al. (2017) propose DrQA which contains a document retriever and a document reader (DocReader).",
"Clark and Gardner (2018) introduce Document-QA which utilizes TF-IDF for paragraph selection and uses a shared normalization training objective.",
"De Cao et al. (2019) employ graph convolutional networks (GCNs) for this task.",
"Zhuang and Wang (2019) design a gated token-level selection mechanism with a local convolution.",
"In contrast, our RikiNet considers multi-level representations with a set of complementary attention mechanisms.",
"To solve the NQ task, Kwiatkowski et al. (2019) adapt Document-QA (Clark and Gardner, 2018) for NQ, and also utilizes DecAtt (Parikh et al., 2016) for paragraph selection and DocReader (Chen et al., 2017) for answer prediction.",
"BERT joint (Alberti et al., 2019b) modifies BERT for NQ.",
"Besides, some works focus on using data augmentation to improve the MRC models on NQ.",
"Alberti et al. (2019a) propose a synthetic QA corpora generation method based on roundtrip consistency.",
"Glass et al. (2019) propose a span selection method for BERT pre-training (SSPT).",
"More recently, Pan et al. (2019) introduce attention-over-attention (Cui et al., 2017) into the BERT model.",
"Pan et al. (2019) also propose several techniques of data augmentation and model ensemble to further improve the model performance on NQ.",
"Although the use of data augmentation and other advanced pre-trained language models (Lan et al., 2019) may further improve model performance, as this is not the main focus of this paper, we leave them as our future work.",
"Our RikiNet is a new MRC model designed tailored to the NQ challenges and can effectively represent the document and question at multi-levels to jointly predict the answers, which significantly outperforms the above methods.",
"We propose the RikiNet, which reads the Wikipedia pages to answer the natural question.",
"The RikiNet consists of a dynamic paragraph dual-attention reader which learns the token-level, paragraph-level and question representations, and a multilevel cascaded answer predictor which jointly predicts the long and short answers in a cascade manner.",
"On the Natural Questions dataset, the RikiNet is the first single model that outperforms the single human performance.",
"Furthermore, the RikiNet ensemble achieves the new state-of-the-art results at 76.1 F1 on long-answer and 61.3 F1 on short-answer tasks, which significantly outperforms all the other models on both criteria.",
"This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the Key Program of National Science Foundation of China (Grant No. 61836006)."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"A common issue in real-world applications of named entity recognition and classification (NERC) is the absence of annotated data for target entity classes during training.",
"Zero-shot learning approaches address this issue by learning models that can transfer information from observed classes in the training data to unseen classes.",
"This paper presents the first approach for zero-shot NERC, introducing a novel architecture that leverage the fact that textual descriptions for many entity classes occur naturally.",
"Our architecture addresses the zero-shot NERC specific challenge that the not-an-entity class is not well defined, since different entity classes are considered in training and testing.",
"For evaluation, we adapt two datasets, OntoNotes and MedMentions, emulating the difficulty of real-world zero-shot learning by testing models on the rarest entity classes.",
"Our proposed approach outperforms baselines adapted from machine reading comprehension and zero-shot text classification.",
"Furthermore, we assess the effect of different class descriptions for this task.",
"Named entity recognition and classification (NERC) is the task of identifying spans of text corresponding to named entities and classifying these spans from a set of pre-defined entity classes.",
"A prevalent issue for many real-world applications is that annotated data does not readily exist.",
"This motivates the focus on the zero-shot setting (Xian et al., 2018; Wang et al., 2019), where annotated data is not available for the classes of interest.",
"Instead, information available from observed classes must be transferred to unseen target classes.",
"were explored for entity linking (EL) (Logeswaran et al., 2019; Wu et al., 2020) and named entity typing (NET) (Obeidat et al., 2019), which are similar to the NERC subtask of named entity classification (NEC).",
"However, no previous work has addressed the task of zero-shot NERC, which additionally requires the detection of which tokens make up an entity in addition to its type, i.e. Named Entity Recognition (NER).",
"This paper is the first to study zero-shot NERC, by leveraging entity type descriptions.",
"The task is illustrated in Figure 1.",
"During testing, the input is a sentence and a set of target entity classes.",
"each accompanied by its description, and the goal is to recognize and classify entities in these target classes.",
"Descriptions contain crucial information for the task.",
"Given as input Shantou Harbour, a natural river seaport, opens to the South China Sea. and a class Facility in Figure 1, using a description Names of human-made structures: infrastructure (streets, bridges), [...] a connection between Facility and Shantou Harbour can be made without having seen an annotated example in training.",
"While using descriptions enables us to predict entity classes unseen in training, NERC poses the additional challenge of modelling the negative class (non-entity tokens) as its defi-nition includes different entity classes and tokens in training and testing.",
"It is possible that words observed as non-entities during training belong to one of the test classes, as seen in Figure 1: both Huaqiao Park , in training, and Shantou Harbour , during testing, are entities of the class Facility , however, Huaqiao Park is labelled as a non-entity in the former.",
"Based on this insight we propose several architectures for NERC based on cross-attention between the sentence and the entity type descriptions using transformers (Vaswani et al., 2017) combined with pre-training (Devlin et al., 2019).",
"We NERC model Dev LOCATION PRODUCT WORK OF ART (WOA) Outside Names of geographical locations.",
"explore modelling the negative class by",
"(i) using a description for the negative class,",
"(ii) modelling the negative class directly,",
"(iii) modelling the negative class using the representations generated for the classes corresponding to types.",
"For evaluation we introduce zero-shot adaptations to two real-world NERC datasets with distinct properties: the OntoNotes (Pradhan et al., 2013) as well as the highly domain-specific MedMentions dataset (Mohan and Li, 2019).",
"The adaptations adhere to recommendations to zero-shot evaluation (Xian et al., 2018) by evaluating models on the rarest classes while ensuring that all class sets are disjoint.",
"Our best model achieves a macro F 1 of 0 .",
"45 on OntoNotes-ZS and 0 .",
"38 on MedMentions-ZS , outperforming a state-of-the-art MRC model for NERC (Li et al., 2020; Sun et al., 2020) and an adapted zero-shot text classification model (Yin et al., 2019).",
"An analysis on the classification and recognition task in isolation highlights the importance of the description choice, finding that annotation guidelines result in higher scores than the class name itself or Wikipedia passages.",
"In NERC, given a sentence s = w 1 , ..., w n of length n and a description d c for each class c C ts in the test set, we predict a sequence of labels y ( C ts ) n , with n being the length of the sentence.",
"We model the task as multiclass classification, which despite ignoring the sequential structure of the output, it has been found to be competitive (Lample et al., 2016; Rei, 2017).",
"Thus, we predict the correct class for each token w at position t : arg max c C ts F ( s, w t , d c ) , using a suitable function F modelling the semantic affinity between w t and d c in the context of s .",
"The parameters of F need to be learned without annotated data for C ts , but with annotated data and descriptions for the training C tr classes.",
"To model F we focus on the use of cross-attention (Humeau et al., 2019; Wolf et al., 2019b) in the form of a transformer encoder (Vaswani et al., 2017).",
"For each type description d c , the cross-attention encoder (X-ENC) generates a vector representation v t,c R h for a token w t in the sentence s : v 1 ,c , ..., v n,c = X-ENC ( s, d c ) .",
"o t,c = v t,c w + b, (2) with v t,c R h and o t,c R .",
"The value o t,c indicates how likely is that token w t belongs to entity class c .",
"In order to be able to recognize entities in addition to classifying them, the scores for each token o t,c 1 ; ... ; o t,c k are concatenated with a score for belonging to the negative class o t,neg , corresponding to not belonging to any of the types considered: o t = ( o t,c 1 ; ... ; o t,c k ; o t,neg ) (3) with o t R k +1 .",
"Obtaining a good estimate for this score is a key challenge in performing zero shot NERC and we discuss it in the next section.",
"We then select the class with the highest score probability after applying a softmax operation: y t = arg max c C ts F ( s, w t , d c ) = arg max c C ts o t,c (cid:80) c (cid:48) C ts o t,c (cid:48) .",
"(4) We label this model Sequential Multiclass Cross-attention Model (SMXM).",
"Referring to the initial example, cross-attention enables Shantou Harbour to attend to infrastructure in the type description of the class Facility , generating a representation for this token based on the type description in the context of the sentence.",
"Cross-attention Encoder The cross-attention model is based on the pre-trained transformer encoder BERT (Devlin et al., 2019) which allows the model to capture surface-level information as well as semantic aspects between all words of the input (Jawahar et al., 2019).",
"For X-ENC the input tuple ( s, d c ) is structured in the form: x X-ENC = [CLS] s [SEP] d c [SEP].",
"As discussed in Section 1, the non-entity class creates a challenging setting it is possible that words observed as non-entities during training belong to one of the test classes.",
"We explore three approaches to modelling the negative class:",
"(i) using a (textual) description for the negative class,",
"(ii) modelling the negative class directly,",
"(iii) modelling the negative class using the representations generated for the classes corresponding to types.",
"Description-based encoding Assuming a description for the negative class d neg , it is straightforward to obtain a representation v t,neg for each token belonging to it using the cross-attention encoder, which is then transformed to a score via a weight vector w neg for this class: o t,neg = v t,neg w Tneg + b neg (5) However, this approach requires a description to describe something that is not rather than is.",
"This makes it very difficult in practice to make an informed decision on the most suitable description.",
"Also, non-entity tokens are likely to differ between training and testing, thus a fixed description is unlikely to perform well.",
"Independent encoding The negative class can be directly modelled since it is observed in the training data.",
"Thus, instead of exploring cross-attention, each token is represented for the negative class in the context of the sentence without taking any description into account: v 1 ,neg , ..., v n,neg = ENC ( s ) , (6) with ENC being a standard transformer encoder (Vaswani et al., 2017).",
"Similar to the description-based approach, v t,neg is linearly transformed to o t,neg using a separate vector w neg (c.f. Eq. 5).",
"Class-aware encoding Description-based and independent encodings do not model the fact that not every entity labelled as a non-entity during training is a non-entity during testing in zero-shot NERC.",
"Instead, we propose to model the negative class by combining the representations generated for the other classes, as generated by the cross-attention encoder (Eq. 1): v t,c 0 , ..., v t,ck .",
"Each vector is then linearly transformed, using w neg cl and then concatenated to a feature map m .",
"We then apply a max-pooling operation over this feature set and take the maximum value: o t,neg cl = max { m } .",
"Finally, we compute o t,neg by linearly combining the representation from the independent encoding and o t,neg cl .",
"To prevent the cross-attention encoder from over-fitting on the few class descriptions, we use a regularizer in the form of entity masking , inspired by the masked language modelling objective used in BERT, to train the model on the training classes C tr .",
"During training with a probability p (tuned as a hyperparameter) the entire entity that is to be classified is masked in the input to the model.",
"This regularization avoids lexical memorization and encourages the model to learn entity context to class description affinities, while still learning to incorporate aspects of the entity itself (e.g. capitalization, shape, morphology) and relating them to the type description.",
"A cross-attention model for tasks such as EL is much less likely to overfit since each entity is associated with a unique description and there is a much larger number of them than entity classes.",
"Due to the label imbalance caused by the OntoNotes-ZS MedMentions-ZS Statistic Train Dev Test Train Dev Test # sentences 59924 8528 8262 28226 9302 9382 # words 1088503 147724 152728 721552 242358 241786 # total entities 54576 1785 1754 113095 1710 1431 # compound entities 31257 905 1628 59031 806 637 # consecutive entities 7902 49 121 30545 125 152 # consecutive entities of same class 3448 39 95 14727 120 147 # unique mentions (not in Train) 634 495 574 721 Table 1: Quantitative statistics of zero-shot dataset OntoNotes-ZS and MedMentions-ZS .",
"negative class, we use class weights q c incorporated to the cross-entropy loss: c (cid:88) i =1 q i p ( y t,i ) log ( p ( y t,i )) .",
"While the factor q is kept to 1 for all non-negative classes, for the negative class q is set using the underlying training dataset distributions using the ratio # entities # non-entity words and further tuned within that range as a hyperparameter.",
"We present adaptations to OntoNotes (Pradhan et al., 2013) and MedMentions (Mohan and Li, 2019) for zero-shot NERC evaluation.",
"OntoNotes is a common benchmark dataset for NERC systems while the more recent MedMentions dataset consists of domain-specific biomedical data.",
"The annotations in the latter are based on the Unified Medical Language System (UMLS) ontology (Bo-denreider, 2004) and do not only include proper named entities but also concepts .",
"For instance, in the passage modeling nurse-patients , modeling is annotated with the concept Research Activity , thus rendering it more challenging.",
"The adaptations follow recommendations for zero-shot evaluation by Xian et al. (2018):",
"(i) Zero-shot methods should be evaluated on the rarer classes, as in real-world scenarios annotated data is likely to be available for the more common ones,",
"(ii) Evaluation metrics should focus on per-class averaged scores to account for the imbalance in terms of samples per class, thus we evaluate our models with the macro-averaged F 1 metric,",
"(iii) Hyperparameters have to be tuned on a development set of classes disjoint from both the training and test set,",
"(iv) Pre-trained neural networks used for zero-shot learning can be trained on arbitrary amount of data as long as the training data does not contain samples of the test set.",
"To create the zero-shot versions of both OntoNotes and MedMentions abiding by rule",
"(i) we measure the frequencies of their respective entity types and keep the four and eleven most frequent ones in OntoNotes and MedMentions, respectively, for training.",
"The remaining ones are split between development and test set by sorting them by frequency and then assigning them alternating between the two sets.",
"To create the zero-shot splits we use the default data splits and remove all annotations of classes that are not associated with the respective split.",
"Quantitative statistics of OntoNotes-ZS and MedMentions-ZS are shown in Table 1.",
"In addition to ensuring that we evaluate on the rarer classes, we also wanted to ensure the classes considered are not trivial to recognize.",
"For example, the class PERCENT in OntoNotes is only assigned to percentages, whose surface form follow regular patterns, while WORK OF ART or PRODUCT are more difficult to recognize.",
"Based on the annotation guidelines of Ontonotes, seven classes were identified to be trivial to recognize (c.f. denoted with in Table 2).",
"To verify this, a simple rule-based system developed for these classes achieved between 0 .",
"60 and 0 .",
"89 micro F 1 , only slightly worse than the fully supervised state-of-the-art NERC model of (Li et al., 2020) (see supplementary material).",
"These classes were excluded from our experiments.",
"We did not identify such trivial classes in MedMentions.",
"A basic description is to simply use the class name itself.",
"In addition, we consider three readily available type description sources for each dataset.",
"The options for OntoNotes are: Annotation guidelines [GL] They have been used to annotated the dataset.",
"These descriptions are highly informative containing precise defini-tions accompanied by examples, as they should help a human perform the task.",
"Wikipedia descriptions, as well as: UMLS Semantic Network [SN] Since the MedMentions dataset is based on the UMLS ontology we explore the short descriptions provided by the UMLS Semantic Network Browser 1 .",
"UMLS Metathesaurus [MT] The Metathesaurus 2 browser is a search engine that agglomerates information of different biomedical sources.",
"For entity type not found in it, semantically similar or subordinate classes are used, e.g. Biomedical Research for Biomedical Occupation or Discipline .",
"Quantitative characteristcs of the description types are shown in table 3.",
"To obtain negative type descriptions, three manually selected sentences from the training set are used that are free of any named entities.",
"We also explored alternating between multiple negative descriptions that we had compiled, however, results were generally worse.",
"All models are implemented using PyTorch (Paszke et al., 2017) and the huggingface implementation (Wolf et al., 2019a) of BERT, using the case-sensitive version of BERT-Large unless otherwise stated.",
"The results reported are the averages of two runs.",
"All Ior Bprefixes to a label were removed for simplicity.",
"Therefore, each entity class is defined by a single label.",
"This simplification results in ambiguity for the NERC task in the case of two consecutive named-entities of the same class, however it reduces the model parameters by half while affecting 5.8% of the entities across the validation and test splits of both datasets (c.f. row # consecutive entities of same class in Table 1).",
"Sentences without any annotations were also excluded.",
"The pre-training data of BERT has been compared to the development and test splits of both datasets to ensure that it has not been pre-trained on testing data (rule",
"(iv) of Xian et al. (2018)) 3 .",
"The hyperparameters for each model were mainly optimized on the validation split of the OntoNotes dataset considering only the non-trivial classes, and then used for the experiments with the MedMentions-ZS dataset.",
"Only the learning rate was tuned for MedMentions-ZS separately.",
"The best model according to development macro-averaged F 1 during training was tested in all experiments on both datasets.",
"Further details 2 https://uts.nlm.nih.gov/uts/umls 3 The dataset has been compared only to the latest Wikipedia dump as the book corpus is not hosted anymore.",
"on the hyperparameter choice are in the supplementary material.",
"While a simple Tf-idf similarity baseline that measures the overlap between the sentence and entity description by computing the cosine similarity shown to be a good baseline for zero-shot entity linking (Logeswaran et al., 2019), F 1 scores on NERC were consistently below 0 .",
"04 on both datasets.",
"Similar observation applies to similarity scores based on word2vec embeddings (Mikolov et al., 2013) as used in (Yin et al., 2019), highlighting the difficulty of this task.",
"Our baselines thus focus on current state-of-the-art models in both NERC and related zero-shot tasks.",
"Binary Entailment Model (BEM) is an NERC adjusted model of the state-of-the-art approach for zero-shot text classification (Yin et al., 2019).",
"They employ BERT, fine-tuned on an entailment dataset, to classify whether a class description ( The text is about X ) is entailed by the text.",
"To adapt this model to NERC, we modify the description to The word is of type X with X being the entity class name, and classify each word instead of the entire sentence.",
"Since their model generates a binary output for each class, the negative prediction for all classes predicts the negative class.",
"By treating each sentence-description pair independently, the relationship between classes as well as the complexity of the negative class in zero-shot evaluation is ignored.",
"We fine-tune BERT-Large on MNLI (Williams et al., 2018), as it performed best in the experiments of (Yin et al., 2019), before training BEM on the zero-shot datasets using adjusted class weights, which has been crucial for successful training of the model; not using it resulted in degenerated solutions in preliminary experiments.",
"The proposed entity masking objective is not suitable for BEM's binary classification approach as it would simply learns to predict the masked token to be an entity during training.",
"MRC for NERC is an approach by Li et al. (2020) who construct queries for entity classes and transform NERC to a machine reading comprehension task for fully supervised flat and nested NERC.",
"Their model generates a span by predicting start and end indices for each entity as well as a matching score for each possible start-end index.",
"Predictions for each entity type are made independently, similar to BEM.",
"Their model showed Ontonotes-ZS Model Dev Test Token Span Token Span BEM 0.28 0.18 0.23 0.11 MRC 0.15 0.15 0.22 0.18 SMXM 0.35 0.23 0.45 0.25 SMXM base 0.30 0.19 0.42 0.20 MedMentions-ZS Model Dev Test Token Span Token Span BEM 0.28 0.19 0.34 0.22 MRC 0.19 0.21 0.23 0.26 SMXM 0.33 0.23 0.38 0.27 SMXM base 0.31 0.20 0.30 0.21 Table 4: Macro-averaged F 1 of NERC on OntoNotes-ZS and MedMentions-ZS , reporting token-based and span-based scores for all baselines and SMXM with class-aware encoding.",
"promising results for the transfer learning experiment when training on the CoNLL03 dataset and testing on OntoNotes, with the latter consisting of a superset of CoNLL03 entity classes, yet it was not tested on completely distinct training and test labels, i.e. zero-shot learning.",
"However, results for our zero-shot task were too low to be considered.",
"We hypothesise two causes:",
"i) In our zero-shot setup the dataset is heavily imbalanced, as most token spans are not entities (typically one to three out of n 2 in a sentence of length n )",
"ii) an incorrect prediction in either the start index, end index, or matching score results in an overall incorrect span, and the accuracy for each of these is unlikely to be high in the zero shot setup.",
"Thus, we simplified the model by excluding the matching matrix, and we use the start and end index with greedy closest-matching to compute the entity span, similar to (Sun et al., 2020).",
"MRC also has been trained using adjusted class weights.",
"NERC Results for both datasets are shown in Table 4, for both token and span-level F 1 .",
"We only report results on the best performing entity description which is the same across all models, i.e. annotation guidelines and Metathesaurus descriptions for OntoNotes-ZS and MedMentions-ZS , respectively; we discuss the impact of description choice in the next section.",
"Shown SMXM results use class-aware encoding of the negative class since it performed better than the other approaches considered (c.f. section 4.4).",
"Statistical significance was determined using the two-tailed Monte Carlo permutation test with 5000 repetitions with p < 0 .",
"05 .",
"Our proposed model, SMXM, performs significantly better than all models on both datasets, with a token-level score of 0 .",
"45 on and 0 .",
"38 for OntoNotes-ZS and MedMentions-ZS , respectively.",
"Comparing SMXM with SMXM base , trained on the smaller BERT-Base (335M vs 109M parameters) highlights the value of larger scale pretraining for domain-specific applications.",
"Scores decrease on both datasets when using the smaller model, with a substantial decrease on MedMentions-ZS to only 0 .",
"30 .",
"Despite its smaller size, SMXM with Bert-Base remains competitive to both BEM and MRC which use BERT-Large.",
"The BEM baseline achieves significantly better token-level scores than MRC for NERC on the development split of OntoNotes-ZS and on both splits of MedMentions .",
"While the MRC for NERC model achieves poor token-level results, its span-level scores are more comparable to BEM and SMXM, even significantly outperforming BEM on the MedMentions-ZS development split despite a much lower token-level score.",
"MRC for NERC has the smallest delta between the token and span-level score out of all models, yet overall scores remained low due to the difficulty of inferring the correct start and end index based only on the description in a zero-shot setup and generalizing to new, unseen types, e.g. determining whether the article the belongs to an entity or not ( the is part of DATE but generally not of PRODUCT ).",
"Per-class scores Scores for each class using SMXM are shown in Table",
"5. For OntoNotes, scores are comparable across the different classes, with WORK OF ART performing worse than the others.",
"In contast, for MedMentions some classes are recognized and classified with comparably high accuracy, such as Bacterium , while Body Substance and Body System score very low.",
"A possible explanation is the similarity (in semantics and/or description) between these classes and classes used for training.",
"For instance, some example entities in Body System 's description are also found in Anatomical Structure (e.g. cardiovascular system ).",
"This would further explain the very high recall but low precision, as entities belonging to the training classes are (erro-neously) identified as entities of these test classes.",
"Analysis of entity descriptions Results on the development set using SMXM with the different entity descriptions introduced in section 3.2 are shown in Table",
"6. Annotation guideline descriptions performs significantly better than all other descriptions on OntoNotes-ZS .",
"Metathesaurus descriptions work best on MedMentions-ZS , with Semantic Network descriptions performing only slightly worse.",
"Using the class name is a surprisingly strong baseline description, performing comparably to WordNet descriptions on OntoNotes-ZS and even better than Wikipedia on MedMentions-ZS .",
"While Wikipedia works well on general types, it performs poorly on the domain-specific types of MedMentions-ZS .",
"Analysing the scores, we identified three properties of descriptions with negative effect on performance: vagueness, noise, and negation.",
"Most UMLS based type descriptions are abstract or un-derspecified, and require either substantial background information or expert knowledge to be useful; for instance, Eukaryote in SN description ( One of the three domains of life, also called Eukarya. These are organisms whose cells are enclosed in membranes and possess a nucleus. ).",
"Furthermore, many descriptions contain noise or unrelated information (e.g. those obtained by Wikipedia).",
"Finally, classes defined by negations or cross-references to other classes result in worse performance, as negated sentences add less information about the class in question.",
"Cross-references cannot be processed by any of the models, as they cannot directly link parts of a class' description to another.",
"Exploring this semi-structured knowledge is interesting future work.",
"On the other hand, we found that explicit examples (e,g.",
"infrastructure (streets, bridges) ) and mentions of syntactic and morphological cues (e.g. These are usually surrounded by quotation marks in the article [...] ) make the annotation guidelines perform particularly well.",
"To validate this qualitative analysis, we modified each dataset's best performing description with the aim to make one worse and one better.",
"First, we worsen the annotation guidelines by removing all explicit mentions of entities and syntactic cues.",
"The token-based macroF 1 for NERC when using SMXM decreased by 0 .",
"05 when explicit examples are removed.",
"Secondly, to improve the Metathesaurus descriptions we removed negations, made them less abstract, and added explicit examples.",
"The modifications on the UMLS descriptions improve the scores by around 0 .",
"03 on the development set.",
"We used the modified Metathesaurus descriptions for all models in result table 4 and table",
"7. Only around forty minutes have been invested to modify the UMLS annotations without expertise in the biomedical domain, likely leaving much room for improvements.",
"Non-entity class modelling We separately analysed how well the different approaches model the negative class.",
"Results on the development set are reported in Table",
"7. The token-level score of SMXM ca with the class-aware encoding of the negative class outperforms both the independent encoding SMXM ind. as well as the negative class description based encoding SMXM desc.",
"approach significantly on the NERC task, confirming the motivation of this approach.",
"Alternative Class Splits While switching classes between the development and test split resulted in overall similar results (tested on three different splits for MedMentions), reducing the number of training classes and redistributing them on the dev and test splits led to a substantial decrease in performance.",
"Results for the extreme case where the number of training classes for MedMentions-ZS has been reduced to the four most frequent ones (with the dev and test sets having eight and nine classes, respectively) are shown in Table",
"8. As seen, SMXM still performs the best, however, only with a score of 0 .",
"14 .",
"Complexity The complexity of our model's and baselines' encoding step in terms of classes is O ( C ) , with C being the number of test classes (in-cluding the negative class).",
"This is an increase in complexity over O (1) in the traditional scenario, however, during training the gradients are accumulated across the inputs, leading to faster convergence.",
"With varying description lengths for different sources (c.f. table 3), the input sequence length is another important factor to consider regarding the model's efficiency, leading to an overall complexity of O ( CN 2 ) , with N being the length of the input sequence.",
"In our experiments, the runtime with SMXM base was the shortest, followed by BEM (due to the entailment descriptions being much shorter), SMXM, and last MRC.",
"Entity Masking Finally, we study the impact of entity masking in Figure 2.",
"First, we plot the validation F 1 score during training for SMXM and SMXM w/o entity masking using guideline annotations.",
"Second, the training loss of the same models in terms of cross-entropy (i.e. Eq. 8).",
"The top plot shows that SMXM's F 1 score converges more slowly but to a higher value than SMXM's highest value w/o masking by 0 .",
"03 points.",
"The model's validation F 1 w/o entity masking decreases in later iterations, indicating overfitting.",
"We confirmed this by observing a higher validation loss when no masking is used.",
"Interestingly, as seen in the loss plot (bottom), the training loss is much lower when using entity masking.",
"This is likely due to entity masking providing additional implicit supervision to the model: masked tokens cannot be the non-entity class.",
"For these masked tokens the model can focus on the entity classification in isolation which appears to help the model extract more useful supervision signal, as indicated by the higher validation F 1 achieved.",
"When trained with masking, SMXM's training loss closely follows the trend of the validation F 1 , indicating good transfer learning from the model's training objective to the zero-shot evaluation.",
"State-of-the-art approaches to NERC include the bidirectional LSTM-CRF (Lample et al., 2016), and more recently models based on the pre-trained transformer architectures, e.g. BERT (De-vlin et al., 2019).",
"these methods are unsuitable for zero-shot learning, with exception to the explored baselines in this paper (Li et al., 2020; Sun et al., 2020).",
"Apart from NERC, manually defined class descriptions have also been explored for relation classification (Obamuyide and Vlachos, 2018) who pose the task as one of textual entailment.",
"Obeidat et al. (2019) use descriptions for zero-shot NET, however, similar to a previous attempt by Ma et al. (2016), they use the underlying hierarchy to only include unseen classes in the leaves of the hierarchy to reduce the relevant unseen classes to only two or three.",
"The only work on zero-shot word sequence labelling (Rei and Sgaard, 2018) explores the transfer from labels on a sentence level objective (e.g. sentiment analysis) to a token or phrase-based annotation, similar to Tackstrom and McDonald (2011).",
"Guerini et al. (2018) label their approach zero-shot named entity recognition, however, they focus on recognizing unseen entities not entity classes .",
"Finally, Fritzler et al. (2019) focused on few-shot NERC using prototypical networks (Snell et al., 2017).",
"They tested their model in the zero-shot setting, but concluded that their approach is not suitable for zero-shot learning as the results on OntoNotes were too low.",
"This paper explored the task of zero-shot NERC with entity type descriptions to transfer knowledge from observed to unseen classes.",
"We addressed the zero-shot NERC specific challenge that the not-an-entity class is not well defined by proposing a multiclass architecture that uses class-aware encoding to model the negative class.",
"The models were evaluated based on zero-shot adaptations of the OntoNotes and MedMentions dataset.",
"The results show that the proposed model outperforms strong baselines and further indicate that high-quality entity descriptions (i.e. annotation guidelines) are an effective way to transfer knowledge from observed to unseen classes.",
"Future work will aim to incorporate the dependencies between the labels predicted.",
"We thank the anonymous reviewers for their time and effort giving us feedback on our paper.",
"This work was supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership (EPSRC).",
"Andreas Vlachos is supported by the ERC grant AVeriTeC (GA 865958)."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Crystal Richardson (Karuk) University of California, Davis",
"Richard Hatcher Jr University at Buffalo [email protected]",
"Emily Prud'hommeaux Boston College [email protected]",
"Abstract Languages are classified as low-resource when they lack the quantity of data necessary for training statistical and machine learning tools and models.",
"Causes of resource scarcity vary but can include poor access to technology for developing these resources, a relatively small population of speakers, or a lack of urgency for collecting such resources in bilingual populations where the second language is high-resource.",
"As a result, the languages described as low-resource in the literature are as different as Finnish on the one hand, with millions of speakers using it in every imaginable domain, and Seneca, with only a small-handful of fluent speakers using the language primarily in a restricted domain.",
"While issues stemming from the lack of resources necessary to train models unite this disparate group of languages, many other issues cut across the divide between widely-spoken low-resource languages and endangered languages.",
"In this position paper, we discuss the unique technological, cultural, practical, and ethical challenges that researchers and indigenous speech community members face when working together to develop language technology to support endangered language documentation and revitalization.",
"We report the perspectives of language teachers, Master Speakers and elders from indigenous communities, as well as the point of view of academics.",
"We describe an ongoing fruitful collaboration and make recommendations for future partnerships between academic researchers and language community stakeholders.",
"Say that we have three speech communities, which we will refer to as Elephant, Ocelot, and Coyote.",
"Each community has their own language, which is commonly characterized by language technology researchers as low-resource or under-resourced.",
"Elephant's language is spoken by * Equal contribution.",
"around 10 million first language (L1) speakers and millions more second language (L2) speakers, and it has a standard widely accepted orthography.",
"Ethnologue (Lewis et al., 2015; Eberhard et al., 2021) classifies this language as used in education, work, mass media, and government at the national level.",
"Ocelot's language has about 500,000 L1 speakers and three different orthographic representations.",
"This language is noted in Ethnologue as in vigorous use, with literature in a standardized form being used by some though not yet widespread or sustainable, and is described as an indigenous language.",
"In comparison, the language of Coyote has only a handful of speakers that could be considered fluent L1 speakers, most of whom are elders.",
"The language currently lacks a standard orthography, and in Ethnologue it is classified as endangered (Meek, 2012).",
"Elephant wants to develop spoken language technology primarily to support the use of speech-enabled applications and smartphone features.",
"Ocelot and Coyote also need spoken language technology but their focus is often on using these technologies to support use of the language by the community through language preservation, documentation, and instruction.",
"Creating chatbots or enabling hands-free cellphone use, while appealing, might not currently be a priority for Ocelot and Coyote.",
"With support from local universities and the government, Elephant has started a project to build an automatic speech recognition (ASR) system.",
"Thus far it has collected around 60 hours of audio data gathered from radio stations as well as recordings produced in a studio.",
"To transcribe the recordings in the common orthography, Elephant resorted to online crowd-sourcing platforms.",
"It has gathered additional texts scraped from the web and from newspapers and books; the texts also followed the standard orthography of the language, totaling 3933 several million tokens.",
"All of the preparations required financial resources and time, but they were completed within the course of less than a year.",
"Elephant's language technology development work has received interest and accordingly collaboration offers from potential industry partners, including a company that makes a popular language learning application.",
"In return the company has asked for the language data from the project either to be made public or to be owned by the company.",
"Elephant decided to cooperate.",
"Ocelot wanted to try ASR as well, though it did not know where to start at first and did not at the moment have government support.",
"On the other hand, Ocelot has connections with a professor in Indigenous Studies who has advanced research knowledge of the language and is trusted by the language community.",
"Over the years the professor has collected around 100 hours of recordings produced by various community members.",
"Though there was not yet a standard orthography for the language, some agreement was reached among the community members.",
"Unlike Elephant, Ocelot was not able to use crowd-sourcing platforms since there are not that many people who have literacy in the language.",
"Fortunately, the professor was able to recruit students from the local university who are L1 speakers of the language to transcribe audio data.",
"They also managed to gather digitized texts of 250,000 tokens from available websites and converted them to the same orthography used for the transcriptions.",
"All of these preparations were finished in a few years.",
"When the same company that reached out to Elephant made the same offer, Ocelot said she would think about it.",
"Things were quite different for Coyote.",
"She did not have support from the government or any connections with universities.",
"A professor used to work with elders from the community and collected dozens of hours of recordings from fieldwork sessions throughout the years, but at a certain point he stopped giving the audio data back to the community and there were no additional copies.",
"The few recordings made by anthropologists and linguists in the early 20th century were not yet digitized and remain archived outside the community.",
"In other words there was no available data yet to build speech technology.",
"When the same company offered to make learning applications in exchange for ownership of the language data, Coyote said no because of their prior negative experiences with outside researchers and the government-funded attempted linguicide (Hinton, 1994; Skutnabb-Kangas, 2000) that occurred in their community just a few generations ago.",
"At this point, many members of the Coyote language are reluctant to work with outsiders, even corporations willing to help them.",
"Coyote decided to be in charge of the documentation process herself.",
"Right now there is one community member who is an L2 learner and is studying computer science as a student at a local university.",
"She is trying to learn about how to build ASR tools.",
"At the same time, she is working with internal researchers from Coyote, some of whom do not have formal training on the linguistic aspects of their language.",
"Their data collection process, however, has turned out to face severe challenges.",
"First, recordings are gradually obtained from conversations with the elders, who are kindly working for free.",
"In order to be respectful to the elders' schedules and to make sure they get enough rest during the long fieldwork sessions, the audio collections are ongoing and comparatively much slower than that for the languages of Elephant and Ocelot.",
"Secondly, the lack of standard orthography for the language creates difficulty in choosing reasonable representations.",
"Although the language has one available grammar book that was written back in the 1920s, there are many words in the recordings that do not appear in the grammar.",
"Additionally, having representative community members come to a consensus on a single orthography requires extensive time and discussion.",
"Over the decades various linguists and anthropologists have come up with a total of eight different orthographic representations for the language, but having multiple written orthographies has resulted in several possible pronunciations for one word or one single utterance.",
"Thus, a significant amount of deliberation is required in order to ensure that the speakers from Coyote do not have too much difficulty reclaiming words or sentences from written documentation.",
"Writing is seemingly so common for languages that are widely spoken or studied in the world that people tend to take it for granted; they do not exactly realize the writing is itself a luxury and a form of language technology that is not natural to many oral speech communities (Bird, 2020; Hinton, 1994; Ryon, 2002; Richardson, 2018).",
"time-consuming endeavor.",
"Although the graduate student herself is an advanced L2 learner, many of the other learners do not have the same proficiency.",
"In other words, there are very few people from Coyote who are capable of transcribing recordings of their language, commonly referred to as the tran-scription bottleneck\" (Zahrer et al., 2020; Shi et al., 2021a). Therefore the transcriptions have to be cross-checked through consultations with the elders from the Coyote speech community from time to time. At this pace, over five years, around only 16 hours of recordings for Coyote have been collected and transcribed, among which 6 hours are monolingual narratives and story-telling in the language, while the rest includes a large amount of code-switching with English. Additional written texts for the language were digitized from the grammar book and an available Bible in the language, yielding around 40,000 tokens. 1.2 Training the ASR models Now each of the three speech communities has some training data in their own language. They are ready to train ASR systems. Elephant adopted the deep neural network (DNN) from the popular Kaldi toolkit (Povey et al., 2011). The transcripts of the audio data used as training data and additional written texts were combined to build a language model, which was then applied in the decoding process of the acoustic model. Word error rates (WERs) below 20% were achieved. Elephant also tried the newer end-to-end neural ASR library, ESPNet (Watanabe et al., 2018), which does not necessarily require language models and has been demonstrated to work well for languages with dozens of hours of audio data (e.g., Yoloxchitl Mixtec (Shi et al., 2021a)) or more (e.g. English, Hindi (Khare et al., 2021)); a similarly strong WER was obtained. Ocelot engaged in the same efforts and she was able to obtain WER numbers comparable to those for Elephant. In contrast, when applying the same DNN architecture from Kaldi (with different hyperparameters) to her six hours of monolingual audio data, Coyote was able to derive a WER of just under 40%. Coyote thought that perhaps this had something to do with the language model since it was trained on a very small number of words so she turned to ESPnet, opting for a training scheme without using the language model for decoding. The results were even worse. Similar results were produced using wav2vec-U (Baevski et al., 2021). Even the ASR frameworks touted as particularly successful for low-resource languages yielded disappointing results for Coyote's language. On the other hand, the number 40% means something different for a community of speakers than it does for the research community. A WER this high might not be that impressive for (academic) researchers, but it may be good enough in the meantime; a model trained on the available six hours of audio data can be used to generate transcriptions of new recordings from ongoing fieldwork sessions or untranscribed archival recordings. These transcriptions can then be corrected by L2 speakers, expediting the transcription process and creating new acoustic and textual training data (Prud'hommeaux et al., 2021). For speech communities like Elephant and Ocelot, which are relatively widely-spoken and often benefit from financial support or collaborative bonds with academics and industry partners, it is possible to collect more data, whenever it is needed. For endangered languages like Coyote's, however, it is unlikely that it will ever be possible to gather even dozens of hours of audio data, regardless of financial or time constraints. 
"2 Academic perspectives",
"As illustrated above, the current research field has coarsely referred both to widely spoken languages lacking an established tradition of natural language processing (NLP) and to endangered indigenous languages as \"low-resource\" or \"under-resourced\", without acknowledging or mentioning the drastically different conditions of their data availability.",
"Recent high-profile work on diversity in language technology included, in one case, just a few sentences encouraging researchers to prioritize endangered languages (Blasi et al., 2021), and in another case, no discussion of endangered languages at all (Mager et al., 2020).",
"While the language taxonomy based on resource availability proposed in Joshi et al. (2020) is quite impressive, the authors simply grouped all languages that are currently lacking resources into the same category.",
"For instance, the Mixtec language (with different varieties), spoken in Mexico, has around half a million L1 speakers, while the Juruna language in Brazil has fewer than 300 L1 speakers; yet both were categorized together as still \"ignored in the aspect of language technology\".",
"Roughly 1,050 of the nearly 30,000 abstracts available in the ACL Anthology bibliography contain the token phrases low resource, under resourced, or resource constrained language.",

Table 1: Number of unique languages named in ACL Anthology abstracts that include the phrases low resource, under resourced or resource constrained language, organized by low-resource language type.

    Category   Description                      Count        Examples
    Elephant   widely spoken, well supported    99 (60.7%)   Bengali, Danish, Igbo, Pashto, Tagalog
    Ocelot     fewer speakers, well supported   39 (23.9%)   Faroese, Maori, Quechua, Yiddish
    Coyote     few speakers, little support     25 (15.3%)   Bribri, Kodi, Mi'kmaq, Veps, Yine
"Of these abstracts, about half name a specific language that has been assigned an ISO-639 code.",
"Excluding obviously high-resource languages that are mentioned in these abstracts as a point of comparison, as a source language for transfer learning, or as a source or target for machine translation, we are left with 163 unique languages (considering their dialectal variations) that have been characterized in at least one ACL abstract as low-resource.",
"Table 1 shows the distribution of these 163 languages with similar resource conditions/external support as Elephant, Ocelot, or Coyote respectively, along with a few examples of each language category.",
"We see Elephant languages far outnumber both Ocelots and Coyotes, with Coyotes representing only 15% of the languages identified as low-resource.",
"There is great variability both in the degree and in the nature of the challenges that arise when developing language technologies for languages with scarce training resources.",
"It is crucial that NLP researchers actively acknowledge this variability and avoid giving the impression that models or architectures developed for Elephant will be suitable for Ocelot or Coyote, or vice versa.",
"We can begin by distinguishing languages classified as endangered from those that are not, and provide case-by-case detailed descriptions of the speaker population size and language data availability for the language being investigated.",
"Contingent on that, it is only recently that the academic community has started holding workshops devoted to endangered and indigenous languages, such as ComputEL (Arppe et al., 2021), held four times since 2014, and Amer-icasNLP (Mager et al., 2021), which took place for the first time in 2021.",
"Work published in these and other venues has included research on several NLP tasks that pertain to language documentation and reclamation for endangered languages, from morphological segmentation (Liu et al., 2021; Kann et al., 2018), finite-state morphological analyzers (Lane and Bird, 2020; Lachler et al., 2018), to machine translation (Zhang et al., 2020; Bird and Chiang, 2012) and ASR (Thai et al., 2020; Morris et al., 2021; Shi et al., 2021b).",
"That being said, most of the work has focused on technology development, with relatively little regard for the ways in which the development of language technology for endangered languages might be different from that for languages with few existing resources but a much larger numbers of speakers.",
"Discussions of whether a proposed language technology would be useful for the workflow of the community's own language documentation efforts, or how it would be combined with the community's revitalization and instructional activities are also noticeably lacking, with a few notable exceptions such as the the verb conjugator, Kawen-nn:nis, developed for the Ohsweken dialect of Kanyen'kha 1 (Kazantseva et al., 2018); the online dictionary developed for Hupa 2 , which is currently used for language-related activities in the community; and the Indigenous Languages Technology project at NRC Canada (Kuhn et al., 2020).",
"The NLP community has formally recognized the importance of developing technologies for endangered languages, and we have the tools to support work in this area.",
"Now we must try to answer this question: what priorities and considerations should researchers take into account when developing technology for endangered languages?",
"Often NLP technologies presume that a language has a standardized written form that may act as source or target for various computational tasks (e.g., ASR, machine translation, named-entity recognition).",
"While standardization is typical of languages in most of the W.E.I.R.D societies (Hen-rich et al., 2010), this is atypical for much of the rest of the world.",
"For many endangered language contexts, the tradition of literacy is very recent, and writing is far less privileged than the oral medium.",
"Sociolinguistic research on small languages has identified significant variation, both dialectal and ideolectal (Skilton, 2017), in these contexts as well.",
"Standardization, frequently considered a first task in language documentation and the development of language technology, often tends to run counter to the goals of the speech communities working towards revitalization (Whaley, 2011).",
"For these communities, linguistic variation is not a problem to be solved but an important element of a vital language ecology.",
"A major goal in language revitalization involves facilitating the usage of the endangered language in a wider set of contexts and situations than it is being used in currently.",
"This entails careful forethought into how language tools can assist in this broadening of usage.",
"Providing state-of-the-art language technology to a community that is critically involved in training new speakers and developing new contexts for usage may not be the most efficient use of time and resources.",
"Tools that assist in classroom education and in developing new usage situations are likely to be of much more immediate value to groups involved in revitalization.",
"The number of speakers of a language also impacts the maximum rate at which new data can be collected to add to the resources available for a language.",
"Even for a language with just thousands of speakers, documentation projects can accumulate resources at a pace impossible for a highly-endangered language.",
"In Yoloxchitl Mixtec, for instance, with around 5000 speakers, there exists a speech corpus of over 100 hours of running speech (Mitra et al., 2016).",
"For languages with few speakers capable and comfortable of speaking their language, who are mostly elders and also very occupied with other activities related to the revitalization of the language, the goal of collecting long-duration recordings in the language does not seem feasible or even reasonable.",
"Additionally, while through the decades certain linguistic academic scholars have responsibly and sensitively built trustworthy and collaborative bonds with indigenous communities (Hale, 1992) (e.g., Dr. Ken Hale working with Warlpiri (Hale, 1983) and Navajo (Ross et al., 2002); Dr. David Rood working with Lakhota (Rood and Taylor, 1996); Dr. R. M. W. Dixon working with Australian aboriginal languages (Dixon, 1970)), for many endangered language communities in North America, the attitudes of earlier European-American set-tler\" scholars towards indigenous communities and their languages have engendered distrust in the motives and biases of outside experts (Harvey, 2015).",
"Earlier fieldwork often involved outsider linguists paying speaker consultants to participate in research that was designed and conducted solely by the researcher in what is now referred to as the linguist-focused model (Czaykowska-Higgins, 2009).",
"A more recent trend in linguistics is the movement toward Community-Based Language Research, in which community members collaborate with outsider linguists on the research which they themselves help design.",
"In the development of language technology, providing the speech communities a central role in the design and implementation of language tools may improve the likelihood of the tools' success.",
"While language technology is certainly of significant value from an academic's point of view, is it actually useful to stakeholders in endangered language communities?",
"To learn from community voices, we designed an informal survey (Table 2) and received responses from a total of 23 language teachers coming from four endangered communities.",
"Among them, five are community-designated Master Speakers 3 of the language, two of whom are elders; the rest of the respondents include one young L1 speaker, and either semi-fluent or fluent L2 speakers of their languages.",
"In the survey, we asked for language teach-ers' thoughts on whether they would consider writing to be a technology, and additionally, whether they think writing, morphological segmentation (Cotterell et al., 2016), ASR (Jimerson and Prud'hommeaux, 2018), video processing, and pedagogical learning applications (Bettinson and Bird, 2017), which are all common in the research field of NLP, could be useful for them.",
"Overall, the majority of community language teachers think all of the five technological applications would be helpful but to different extents.",
"For example, most think that having written documentation would be valuable for aspects of language teaching, reclamation, and intergenerational transmission of cultural and linguistic knowledge.",
"In the words of one respondent, Written documentation is useful because a lot of it is old.",
"It captures the 3 Master speakers are indigenous community members who are fluent in their language and have accepted apprentices who study the language with them through the oral tradition (Richardson and Brucell, 1993).",
"way speakers talked and created words.",
"The writing can be harvested, so that we reclaim our words and speak in the old style",
"again.\" However, writing has not been shown to be an acceptable alternative to learning under a Master Speaker through the oral tradition, which requires no written resources; and learning from writing alone is not necessary or sufficient for restoring linguistic and cultural knowledge\".",
"4 With morphological segmentation, most language teachers stated that knowing the morphological structures of words would be informative for them to learn to piece together the meaning of phrases\". Before I attended language pods I had gone to some community language classes, but living far away a lot of my language learning came through reading the dictionary. I thought that having learned the morphologies of words because of how the dictionary shows those word parts was extremely helpful for me in order to not only learn those words but also be able to figure out how to create other ones without checking the dictionary.\"",
"In the case of ASR, some language teachers thought that it would be an interesting idea\"; they could see themselves reviewing the transcripts\" and the technology would help with sound recognition and",
"pronunciation.\"; these transcripts, however, would not be as effective as listening to the audio\".",
"Others expressed strong feelings against ASR, saying that To me this is just a way for linguists to secure funding for themselves and their tech project, which takes money and resources away from speech communities.",
"This kind of thing is not language revitalization, as it doesn't create 4 We acknowledge that different indigenous speech communities have different perspectives on these matters.",
"Writing alone is extremely useful for speech communities with no Master Speakers left.",
"We note particularly the Breath of Life (BOL) speech communities that reclaim their languages from written documentation; the written documentation helps BOL community researchers become the next generation of Master Speakers.",
"In other words, writing is not the end goal in the language learning process for these indigenous communities.",
"new speakers.",
"It generates new texts so they can sit on the shelf of some archive.",
"Not helpful\". Almost all language teachers favor automatic processing of video materials. They mentioned this technology is of great value because (it) captures the authenticity of the Speaker/Apprentice\"; others said they would use videos to watch the elders' mouths when they speak as well as their facial expressions.",
"Intonations and body expression really add to conversations.",
"Those things can be lost when just reading, writing or even listening.",
"Seeing the facial expressions and body language are important to understand the contexts of how specific words are being used\". At last, regarding pedagogical applications, community members suggested that they could be beneficial given that the applications could provide repetition, drill and practice. This would increase learner confidence in their own adequate exposure to the language (in a non-threatening manner). It would allow the learner to understand how the language works, so that new speech beyond the app could be developed by the learner\".",
"They indicated, however, that for the applications to work , people would have to actually use them\".",
"The observations from this informal survey might be surprising to researchers, who typically consider language technology to be broadly useful and beneficial to all people.",
"An awareness that is lacking in the research field, however, is that the purpose of language technologies and their development process might vary significantly when applied to indigenous and endangered languages.",
"For many of these languages, the priorities of the speech communities are how to more effectively document, teach, and reclaim their language; how to save the cultural heritage passed down from the elders; and how to let their language have a voice among other widely-spoken or dominant languages.",
"To linguists, the Karuk speech community is a critically endangered language community of North California.",
"To speech community members, the Karuk language is a vital language with approximately 25 speakers, including five Master Speakers and other language teachers, archivists and activists.",
"The Karuk language community is one which is thriving, though surviving through grass roots revitalization with very little infrastructure at the tribal language program level.",
"When it comes to modern language revitalization, data cannot and should not be separated from the Master Speakers.",
"Their experience of government policies aimed at linguicide (Hinton, 1994; Skutnabb-Kangas, 2000) and their sense of loss of their mother tongue, as L1 speakers become more and more scarce, are realities that new speakers and field linguists need to acknowledge.",
"For speakers of the Karuk language, these issues come through in the community's internal documentation of their language.",
"For instance, one Master Speaker of the language explained the loss of speakers when he said:",
"(t)hose were all my friends.",
"That's what I was telling [the nurse].",
"I said I got a lot on my mind.",
"I said, I sit here all by myself and I'm thinking about all the people that left me. I said, it's kinda, you don't feel good I mean, you know, when you think about them.",
"You're not supposed to, they're gone.",
"Xatik, let it go.",
"But I just can't help it.",
"I think about all the funny things we did together, laughing and talking\" 5 . His words reflect the culture of language loss as it occurs on the human level. Sometimes this reality stops potential Master Speakers from working with linguists and young speakers eager to reclaim their language/identity. It is clear that when working with elderly Master Speakers, methodologies must include space for the elders to vent their grief. Only after this is attended to, can the language be learned by young language workers striving to create a future where the language is worth more than the losses felt. The young people and Master Speakers who attend to this re-envisioning of an indigenous future with endangered languages in perpetuity, are the cornerstone of hope and healing (Whalen et al., 2016; Hinton, 2013; Leonard, 2011). One speaker discusses her view of indigenous second language acquisition (SLA): Reading outta 5 All quotes in this paper were initially documented by an author of this paper during her fieldwork sessions. them books is OK. You can probably gather a lot. Yeah, you guys work hard at it. I've been trying to get all these people together. At least once a month all the speakers should work together. You can just gather one day, all the language speakers, you know? And start talking the language. A lot of [speakers] get enough [language] so they can teach... But they're not really getting all of it. If we're all together, maybe you had something that you wanted to say and you didn't know the words, you could ask somebody. Somebody would know. In other words, help one another\". Here we come to understand that linguistic documentation is useful for language reclamation, and writing them books is a fruitful task (Grenoble, 2017; Rigney, 1999).",
"But one theme that emerges from documentation of elderly Master Speakers of the Karuk language is that writing cannot replace the value of speech communities coming together and speaking their language, continuing the oral tradition.",
"Written data doesn't save a language; it safeguards knowledge.",
"The ultimate goal of endangered language communities is to someday house that same knowledge in the hearts and minds of their members.",
"What does a healthy and trustworthy workflow look like when bringing together an endangered indigenous language community with the (academic) research community?",
"After gaining perspectives from academics as well as language teachers and elders from indigenous endangered language communities, here we describe an ongoing workflow devoted to developing a morphological parser for the Cayuga language.",
"With approximately 50 L1 elder speakers and an ever-growing number of L2 speakers, the Cayuga language fits the description of a highly or critically endangered language.",
"Community-based revitalization projects include an immersion language preschool, adult language courses, as well as teacher-training programs at the local Polytechnic.",
"Two authors of this paper are participants in this project, and both are linguists in academia.",
"Specifically, one of them has years of connections with the Cayuga speech community and advanced research knowledge in the language, while the other has extensive training in computational linguistics but no initial connections with the speech community.",
"is ongoing.",
"The overall workflow is simple and straightforward.",
"First, the author known by the Cayuga speech community introduced the other author to the community.",
"They described the general idea for the project and mentioned that if they were to carry out the project, they would begin with words already found in the published grammar.",
"While morphological segmentation is of interest to linguists, and morphological supervision has potential utilities for certain NLP tasks such as dependency parsing (Seeker and etinoglu, 2015) and bilingual word alignment (Eyigz et al., 2013), in this case, the main goal was to ask whether community members would find morphological segmentation useful for their own language teaching and documentation.",
"Community members mentioned that explicitly teaching students various inflectional elements of complex verbs, segmenting them, and in some instances color-coding morphemes have been useful for students to learn verbal arguments.",
"Second, after securing the go-ahead from community members, the two authors have been meeting almost every week for an hour to discuss progress.",
"The author with extensive research knowledge of the language manually performs morphological segmentation of around 50 words every week.",
"In particular, he provides annotations of both surface segmentation and canonical segmentation (Cotterell et al., 2016).",
"The former is to be later incorporated into the workflow for building ASR systems for the language using recordings already collected; the latter has the objective of gaining a better understanding of the language from a linguistic perspective.",
"The key difference between these two types of segmentation is that for surface segmentation, the concatenation of the individual morphemes stays true to the initial (orthographic) representation of the word (e.g., onadowa:doh on-adowa:d-oh ; the word means they are hunting ), whereas for canonical segmentation, decomposing a word into its component morphemes involves the addition and/or deletion of characters (or phonemes) in order to outline the orthographic or phonological changes during the word formation process (e.g., onadowa:doh yodi-adowad-oh ).",
"With the new words annotated every week, the author with a computational background trains segmentation models in an iterative fashion, by combining the words of the current week with those from previous weeks to construct a data set for model training and evaluation.",
"Model performance, indexed by F1 score, is recorded weekly.",
"As of now we have annotations for 262 words.",
"The F1 scores for both surface and canonical segmentation approximate 50%.",
"Our follow-up step is to train models using all these words, then apply them to new data that has not yet been annotated to enhance and accelerate manual annotation.",
"Once the F1 scores reach around 75%, we plan to report back to the community, inform them about where things are, and discuss details of incorporating our research output with their own language work.",
"Considering academic output on endangered languages more holistically, we conclude that there are not enough narratives about the process of working with the community.",
"Academic (NLP) researchers working on indigenous languages, particularly endangered languages like Karuk and Cayuga that have historically been suppressed, should take the following steps when planning projects, describing their research, and collaborating with stakeholders in the speech communities: (1) Make efforts to actually know the indigenous speech communities and build meaningful bonds with them.",
"For instance, fieldwork researchers, when possible, should try to train young community internal researchers to document the language, if they have aspirations to become language teachers; training them can help the speech communities increase longevity and sustainability.",
"Academic researchers should continue to assist the community that they have worked with when Master Speakers are no longer able to participate in language documentation.",
"Assistance might involve writing dictionaries that would later be given to the community, or helping heritage language learners with language revitalization and reclamation.",
"(2) Consider that speakers and community researchers should be offered the opportunity to be co-authors on work they made meaningful contribution to, and/or be listed as contributors in appropriate ways.",
"(3) Describe the data collection protocols followed and challenges faced in research output.",
"Be attentive and respectful to indigenous community members' schedules, needs (e.g., elders 3940 might need to take medications during fieldwork sessions), and perspectives.",
"(4) Speak clearly about plans for the sharing, archiving, and storing of the data (Rigney, 2006).",
"In particular, make sure to be aware that Master Speakers want at the very least co-use copyright over all data which shall be inherited by their descendants.",
"In addition, physical copies of all data should be given to Master Speakers, and copies should be submitted to tribal archives or archivists.",
"The only exception to this rule occurs when Master Speakers ask for their data to be edited before being made public to remove gossip or culturally sensitive material before making copies available.",
"(5) Create language technologies together in consultation with speech communities in order to ensure their usefulness to language programs.",
"The developed technology needs to be accessible to community language workers.",
"(6) Discuss concrete plans for how technology output can be incorporated into the documentation and revitalization work of the speech communities.",
"For instance, a morphological parser needs to visualize morphemes and word construction in such a way as to be a valuable teaching tool for speech community members; ASR systems should not require data extraction or facilitate data ownership by community-external researchers or corporations.",
"Lastly, each perspective and motivation for indigenous language documentation has value and is worth recording.",
"We hope our work will encourage academics to focus on prioritizing the needs and preferences of endangered speech communities when working with them to develop technologies for their languages.",
"In particular, academics must keep in mind the relationship many endangered language communities have with their languages.",
"One Master Speaker of the Karuk language captured this nature of this relationship when he said",
"(t)he Karuk language is a canoe.",
"It holds all of our baskets, our regalia, our materials, our food.",
"The canoe holds all our practices, songs and stories.",
"It holds all our people and all the Karuk people yet to be born.",
"The canoe carries us all; without it, we can't get anywhere (Richardson, 2018).",
"We would like to thank the speech communities of Karuk, Gayogoho:no' (Cayuga), Kanienkeha (Mohawk), and Ondowa'ga:' (Seneca).",
"With the Karuk speech community, we are especially grateful to the following honored Elders who have passed away: Junie Donahue, Vina Smith, Charlie Thom, and Sonny Davis; we are also grateful to the current Master Speakers of the community who provided us with their invaluable insights: Julian Lang, Phil Albers Jr., Nancy Steele, and Susan Gehr; lastly, we want to thank the dedicated language teachers of the community: Florrine Super, Tamara Alexander, Lulu Alexander, Jason Hocka-day.",
"Within the Gayogoho:no' speech community, we are especially grateful to Gasenneeyoh Crawford, Renae Hill, Amos Keye, and Sose Smith.",
"This material is based upon work supported by the National Science Foundation under Grant #2127309 to the Computing Research Association for the CIFellows Project, and Grant #1761562.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the",
"author(s) and do not necessarily reflect the views of the National Science Foundation nor the Computing Research Association.",
"The data from the informal survey described in this paper were collected from language teachers and speakers from four indigenous speech communities of North America.",
"The survey is language-agnostic in the sense that it could be used and expanded to other indigenous speech communities as well.",
"We hope that the ethical challenges outlined here will motivate other researchers working on language technology to also be aware of the differences between languages in low-resource settings\" but with much larger speaker populations, and indigenous endangered languages; and to be attentive to the needs of the speech communities of the latter. References Antti Arppe, Jeff Good, Atticus Harrigan, Mans Hulden, Jordan Lachler, Sarah Moeller, Alexis Palmer, Mi-ikka Silfverberg, and Lane Schwartz, editors. 2021. Proceedings of the 4th Workshop on the Use of Computational Methods in the Study of Endangered Languages Volume 1 (Papers) . Association for Computational Linguistics, Online. 3941 Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised speech recognition. arXiv preprint arXiv:2105.11084 . Mat Bettinson and Steven Bird. 2017. Developing a suite of mobile applications for collaborative language documentation. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages , pages 156164. Steven Bird. 2020. Decolonising speech and language technology. In Proceedings of the 28th International Conference on Computational Linguistics , pages 35043519, Barcelona, Spain (Online). International Committee on Computational Linguistics. Steven Bird and David Chiang. 2012. Machine translation for language preservation. In Proceedings of COLING 2012: Posters , pages 125134, Mumbai, India. The COLING 2012 Organizing Committee. Damin Blasi, Antonios Anastasopoulos, and Graham Neubig. 2021. Systematic inequalities in language technology performance across the world's languages. arXiv preprint arXiv:2110.06733 . Ryan Cotterell, Tim Vieira, and Hinrich Schtze. 2016. A joint model of orthography and morphological segmentation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 664669, San Diego, California. Association for Computational Linguistics. Ewa Czaykowska-Higgins. 2009. Research models, community engagement, and linguistic fieldwork: Reflections on working within canadian indigenous communities. Language documentation & conservation , 3(1):182215. Robert Malcolm Ward Dixon. 1970. Proto-australian laminals. Oceanic Linguistics , pages 79103. David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2021. Ethnologue: Languages of the World, Twenty-fourth edition . SIL International. Elif Eyigz, Daniel Gildea, and Kemal Oflazer. 2013. Simultaneous word-morpheme alignment for statistical machine translation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 3240, Atlanta, Georgia. Association for Computational Linguistics. Lenore A. Grenoble. 2017. Producing language reclamation by decolonising language'. In Wesley Leonard and Haley De Korne, editors, The Cambridge Handbook of Endangered Languages , volume 14, pages 1536. London: EL Publishing. Ken Hale. 1983. Warlpiri and the grammar of non-configurational languages. Natural Language & Linguistic Theory , 1(1):547. Ken Hale. 1992. Endangered languages: On endangered languages and the safeguarding of diversity. language , 68(1):142. Sean P. Harvey. 2015. Native Tongues: Colonialism and Race from Encounter to the Reservation . Harvard University Press, Cambridge, Massachusetts. Joseph Henrich, Steven J Heine, and Ara Norenzayan. 2010. The weirdest people in the world? 
Behavioral and brain sciences , 33(2-3):6183. Leanne Hinton. 1994. Flutes of Fire: Essays on California Indian Languages. ERIC. Leanne Hinton. 2013. Bringing our languages home: Language revitalization for families . Heyday Books. Robbie Jimerson and Emily Prud'hommeaux. 2018. ASR for documenting acutely under-resourced indigenous languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) , Miyazaki, Japan. European Language Resources Association (ELRA). Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 62826293. Katharina Kann, Jesus Manuel Mager Hois, Ivan Vladimir Meza-Ruiz, and Hinrich Schtze. 2018. Fortification of neural morphological segmentation models for polysynthetic minimal-resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) , pages 4757. Anna Kazantseva, Owennatekha Brian Maracle, Ronkwe'tiyhstha Josiah Maracle, and Aidan Pine. 2018. Kawennn:nis: the Wordmaker for Kanyen'kha. In Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages , pages 5364, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj. 2021. Low resource ASR: The surprising effectiveness of high resource transliteration. In The Annual Conference of the International Speech Communication Association (Interspeech) , pages 15291533. Roland Kuhn, Fineen Davis, Alain Dsilets, Eric Joa-nis, Anna Kazantseva, Rebecca Knowles, Patrick Littell, Delaney Lothian, Aidan Pine, Caroline Running Wolf, Eddie Santos, Darlene Stewart, Gilles Boulianne, Vishwa Gupta, Brian Maracle Owen-natkha, Akwiratkha' Martin, Christopher Cox, Marie-Odile Junker, Olivia Sammons, Delasie Torko-rnoo, Nathan Thanyehtnhas Brinklow, Sara Child, Benot Farley, David Huggins-Daines, Daisy Rosenblum, and Heather Souter. 2020. The Indigenous 3942 Languages Technology project at NRC Canada: An empowerment-oriented approach to developing language software. In Proceedings of the 28th International Conference on Computational Linguistics , pages 58665878, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jordan Lachler, Lene Antonsen, Trond Trosterud, Sjur Moshagen, and Antti Arppe. 2018. Modeling Northern Haida verb morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) , Miyazaki, Japan. European Language Resources Association (ELRA). William Lane and Steven Bird. 2020. Bootstrapping techniques for polysynthetic morphological analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 66526661. Wesley Leonard. 2011. Challenging extinction\" through modern Miami language practices."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other"
] |
[
"Data augmentation is an effective solution to data scarcity in low-resource scenarios.",
"However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatsifactory performance.",
"In this work, we propose Masked Entity Language Modeling (MELM) as a novel data augmentation framework for low-resource NER.",
"To alleviate the token-label misalignment issue, we explicitly inject NER labels into sentence context, and thus the fine-tuned MELM is able to predict masked entity tokens by explicitly conditioning on their labels.",
"Thereby, MELM generates high-quality augmented data with novel entities, which provides rich entity regularity knowledge and boosts NER performance.",
"When training data from multiple languages are available, we also integrate MELM with code-mixing for further improvement.",
"We demonstrate the effectiveness of MELM on monolingual, cross-lingual and multilingual NER across various low-resource levels.",
"Experimental results show that our MELM presents substantial improvement over the baseline methods.",
"1 1 Introduction Named entity recognition (NER) is a fundamental NLP task which aims to locate named entity mentions and classify them into predefined categories.",
"As a subtask of information extraction, it serves as a key building block for information retrieval (Banerjee et al., 2019), question answering (Fabbri et al., 2020) and text summarization systems (Nallapati et al., 2016) etc.",
"However, except a few high-resource languages / domains, the majority of languages / domains have limited amount Ran Zhou is under the Joint Ph.D.",
"Since manually annotating sufficient labeled data for each language / domain is expensive, low-resource NER (Cotterell and Duh, 2017; Feng et al., 2018; Zhou et al., 2019; Rijhwani et al., 2020) has received increasing attention in the research community over the past years.",
"As an effective solution to data scarcity in low-resource scenarios, data augmentation enlarges the training set by applying label-preserving transformations.",
"Typical data augmentation methods for NLP include (1) word-level modification (Wei and Zou, 2019; Kobayashi, 2018; Wu et al., 2019; Kumar et al., 2020) and (2) back-translation (Sennrich et al., 2016; Fadaee et al., 2017; Dong et al., 2017; Yu et al., 2018).",
"Despite the effectiveness on sentence-level tasks, they suffer from the token-label misalignment issue when applied to token-level tasks like NER.",
"More specifically, word-level modification might replace an entity with alternatives that mismatch the original label.",
"Back-translation creates augmented texts that largely preserve the original content.",
"However, it hinges on external word alignment tools for propagating the labels from the original input to the augmented text, which has proved to be error-prone.",
"To apply data augmentation on token-level tasks, Dai and Adel (2020) proposed to randomly substitute entity mentions with existing entities of the same class.",
"They avoided the token-label misalignment issue but the entity diversity does not increase.",
"Besides, the substituted entity might not fit into the original context.",
"Li et al. (2020a) avoided the token-label misalignment issue by only diversifying the context, where they replaced context (having O' label) tokens using MASS (Song et al., 2019) and left the entities (i.e. aspect terms in their task) completely unchanged.",
"However, according to the NER evaluations in Lin et al. (2020), augmentation on context gave marginal improvement on pretrained-LM-based NER models.",
"Our preliminary results on low-resource NER (see Figure 1) also demonstrate that diversifying entities in the training data is more effective than introducing more context patterns.",
"Inspired by the aforementioned observations, we propose Masked Entity Language Modeling (MELM) as a data augmentation framework for low-resource NER, which generates augmented data with diverse entities while alleviating the challenge of token-label misalignment.",
"MELM is built upon pretrained Masked Language Models (MLM), and it is further finetuned on corrupted training sentences with only entity tokens being randomly masked to facilitate entity-oriented token replacement.",
"Simply masking and replacing entity tokens using the finetuned MLM is still insufficient because the predicted entity might not align with the original label.",
"Taking the sentence shown in Figure 2b as an example, after masking the named entity European Union (Organization), the finetuned MLM could predict it as Washington has.",
"Such prediction fits the context but it is not aligned with the original labels.",
"To alleviate the misalignment, our MELM additionally introduces a labeled sequence linearization strategy, which respectively inserts one label token before and after each entity token and regards the inserted label tokens as the normal context tokens during masked language modeling.",
"Therefore, the prediction of the masked token is conditioned on not only the context but the entity's label as well.",
"After injecting label information and finetuning on the label-enhanced NER data, our MELM can exploit rich knowledge from pre-training to increase entity diversity while greatly reducing token-label misalignment.",
"Code-mixing (Singh et al., 2019; Qin et al., 2020; Zhang et al., 2021) achieved promising results by creating additional code-mixed samples using the available multilingual training sets, which is particularly beneficial when the training data of each language is scarce.",
"Fortunately, in the scenarios of multilingual low-resource NER, our MELM can also be applied on the code-mixed examples for further performance gains.",
"We first apply code-mixing by replacing entities in a source language sentence with the same type entities of a foreign language.",
"However, even though token-label alignment is guaranteed by replacing with entities of the same type, the candidate entity might not best fit into the original context (for example, replacing a government department with a football club).",
"To solve this problem, we propose an entity similarity search algorithm based on bilingual embedding to retrieve the most semantically similar entity from the training entities in other languages.",
"Finally, after adding language markers to the code-mixed data, we use them to fine-tune MELM for generating more code-mixed augmented data.",
"To summarize, the main contributions of this paper are as follows: (1) we present a novel framework which jointly exploits sentence context and entity labels for entity-based data augmentation.",
"It consistently achieves substantial improvement when evaluated on monolingual, cross-lingual, and multilingual low-resource NER; (2) the proposed labeled sequence linearization strategy effectively alleviates the problem of token-label misalignment during augmentation; (3) an entity similarity search algorithm is developed to better bridge entity-based data augmentation and code-mixing in multilingual scenarios.",
"Fig. 2c presents the work flow of our proposed data augmentation framework.",
"We first perform labeled sequence linearization to insert the entity label tokens into the NER training sentences (Sec-tion 2.1).",
"Then, we fine-tune the proposed MELM on linearized sequences (Section 2.2) and create augmented data by generating diverse entities via 2252",
"The augmented data undergoes post-processing (Section 2.4) and is combined with the original training set for training the NER model.",
"Algorithm 1 gives the pseudo-code for the overall framework.",
"Under multilingual scenarios, we propose an entity similarity search algorithm as a refined code-mixing strategy (Section 2.5) and apply our MELM on the union set of gold training data and code-mixed data for further performance improvement.",
"To minimize the amount of generated tokens incompatible with the original labels, we design a labeled sequence linearization strategy to explicitly take label information into consideration during masked language modeling.",
"Specifically, as shown in Figure 2c, we add the label token before and after each entity token and treat them as normal context tokens.",
"The yielded linearized sequence is utilized to further finetune our MELM so that its prediction is additionally conditioned on the inserted label tokens.",
"Note that, we initialize the embeddings of label tokens with those of tokens semantically related to the label names (e.g., organization for B-ORG ).",
"By doing so, the linearized sequence is semantically closer to a natural sentence and the difficulty of finetuning on linearized sequence could be reduced (Kumar et al., 2020).",
"Unlike MLM, only entity tokens are masked during MELM fine-tuning.",
"At the beginning of each finetuning epoch, we randomly mask entity tokens in the linearized sentence X with masking ratio .",
"Then, given the corrupted sentence X as input, our MELM is trained to maximize the probabilities of the masked entity tokens and reconstruct the linearized sequence X : max log p ( X | X ) n (cid:88) i =1 m i log p ( x i | X ) (1) where represents the parameters of MELM, n is the number of tokens in X , x i is the original token in X , m i = 1 if x i is masked and otherwise m i = 0 .",
"Through the above fine-tuning process, the proposed MELM learns to make use of both contexts and label information to predict the masked entity tokens.",
"As we will demonstrate in Section 4.1, the predictions generated by the finetuned MELM are significantly more coherent with the original entity label, compared to those from other methods.",
"To generate augmented training data for NER, we apply the fine-tuned MELM to replace entities in the original training samples.",
"Specifically, given a corrupted sequence, MELM outputs the probability of each token in the vocabulary being the masked entity token.",
"However, as the MELM is fine-tuned on the same training set, directly picking the most probable token as the replacement is likely to return the masked entity token in the original training sample, and might fail to produce a novel augmented sentence.",
"Therefore, we propose to randomly sample the replacement from the top k most probable components of the probability distribution.",
"Formally, given the probability distribution 2253 Algorithm 1 Masked Entity Language Modeling (MELM) Given D train , M Given gold traning set D train and pretrained MLMMD masked , D aug for { X, Y } D train do X LINEARIZE ( X, Y ) Labeled sequence linearization X FINETUNEMASK ( X, ) Randomly mask entities for fine-tuning D masked D masked { X } end for M finetune FINETUNE ( M , D masked ) Fine-tune MELM on masked linearized sequences for { X, Y } D masked do repeat R times : X LINEARIZE ( X, Y ) Labeled sequence linearization X GENMASK ( X, ) Randomly mask entities for generation X aug RANDCHOICE ( M finetune ( X ) , Top k = 5) Generate augmented data with fine-tuned MELM D aug D aug { X aug } end for D aug POSTPROCESS ( D aug ) Post-processing return D train D aug P ( x i | X ) for a masked token, we first select a set V ki V of the k most likely candidates.",
"Then, we fetch the replacement x i via random sampling from V ki .",
"After obtaining the generated sequence, we remove the label tokens and use the remaining parts as the augmented training data.",
"For each sentence in the original training set, we repeat the above generation procedure R rounds to produce R augmented examples.",
"To increase the diversity of augmented data, we adopt a different masking strategy from train time.",
"For each entity mention comprising of n tokens, we randomly sample a dynamic masking rate from Gaussian distribution N ( , 2 ) , where the Gaussian variance 2 is set as 1 /n 2 .",
"Thus, the same sentence will have different masking results in each of the R augmentation rounds, resulting in more varied augmented data.",
"To remove noisy and less informative samples from the augmented data, the generated augmented data undergoes post-processing.",
"Specifically, we train a NER model with the available gold training samples and use it to automatically assign NER tags to each augmented sentence.",
"Only augmented sentences whose predicted labels are consistent with the their original labels are kept.",
"The post-processed augmented training set D aug is combined with the gold training set D train to train the final NER tagger.",
"the proposed MELM on language-specific data for performance improvement.",
"Nevertheless, it offers higher potential to enable MELM on top of code-mixing techniques, which proved to be effective in enhancing multilingual learning (Singh et al., 2019; Qin et al., 2020; Zhang et al., 2021).",
"In this paper, with the aim of bridging MELM augmentation and code-mixing, we propose an entity similarity search algorithm to perform MELM-friendly code-mixing.",
"Specifically, given the gold training sets { D train | L } over a set L of languages, we first collect label-wise entity sets E ,y , which consists of the entities appearing in D train and belonging to class y .",
"To apply code-mixing on a source language sentence X src , we aim to substitute a mentioned entity e of label y with a target language entity e sub E tgt ,y , where the target language is sampled as tgt U ( L \\ { src } ) .",
"Instead of randomly selecting e sub from E tgt ,y , we choose to retrieve the entity with the highest semantic similarity to e as e sub .",
"Practically, we introduce MUSE bilingual embeddings (Conneau et al., 2017) and calculate the entity's embedding Emb ( e ) by averaging the embeddings of the entity tokens: Emb ( e ) = 1 | e | | e | (cid:88) i =1 MUSE src , tgt ( e i ) (2) where MUSE src , tgt denotes the src tgt aligned embeddings and e i is the i -th token of e .",
"Next, we obtain the target-language entity e sub semantically closest to e as follows: e sub = argmax e E tgt ,y f ( Emb ( e ) , Emb ( e )) (3) 2254 f ( , ) here is the cosine similarity function.",
"The output entity e sub is then used to replace e to create a code-mixed sentence more suitable for MELM augmentation.",
"To generate more augmented data with diverse entities, we further apply MELM on the gold and code-mixed data.",
"Since the training data now contains entities from multiple languages, we also prepend a language marker to the entity token to help MELM differentiate different languages, as shown in Figure 3.",
"To comprehensively evaluate the effectiveness of the proposed MELM on low-resource NER, we consider three evaluation scenarios: monolingual , zero-shot cross-lingual and multilingual low-resource NER.",
"We conduct experiments on CoNLL NER dataset (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) of four languages where L = {English (En), German (De), Spanish (Es), Dutch (Nl)}.",
"For each language L , we first sample N sentences from the full training set as D ,N train , where N { 100 , 200 , 400 , 800 } to simulate different low-resource levels.",
"For a realistic data split ratio, we also downscale the full development set to N samples as D ,N dev .",
"The full test set for each language is adopted as D test for evaluation.",
"For monolingual experiments on language with low-resource level N { 100 , 200 , 400 , 800 } , we use D ,N train as the gold training data, D ,N dev as the development set and D test as the test set.",
"For zero-shot cross-lingual experiments with low-resource level N { 100 , 200 , 400 , 800 } , we use D En ,N train as the source language gold training data, D En ,N dev as the development set and D De test , D Es test and D Nl test as target language test sets.",
"Under multilingual settings where N training data from each language is available ( N { 100 , 200 , 400 } ) , we use (cid:83) LD ,N train as the gold training data, (cid:83) LD ,N dev as the development set and evaluate on D Entest , D De , test , D Estest and D Nltest , respectively.",
"MELM Fine-tuning We use XLM-RoBERTa-base (Conneau et al., 2020) with a language-modeling head to initialize MELM parameters.",
"MELM is fine-tuned for 20 epochs using Adam optimizer (Kingma and Ba, 2015) with batch size set to 30 and learning rate set to 1 e 5 .",
"NER Model We use XLM-RoBERTa-Large (Conneau et al., 2020) with CRF head (Lample et al., 2016) as the NER model for our experiments 2 .",
"We adopt Adamw optimizer (Loshchilov and Hutter, 2019) with learning rate set to 2 e 5 and set batch size to 16.",
"The NER model is trained for 10 epochs and the best model is selected according to dev set performance.",
"The trained model is evaluated on test sets and we report the averaged Micro-F1 scores over 3 runs.",
"Hyperparameter Tuning The masking rate in MELM fine-tuning, the Gaussian mean for MELM generation and the number of MELM augmentation rounds R are set as 0.7, 0.5 and 3, respectively.",
"All of these hyperparameters are tuned on the dev set with grid search.",
"Details of the hyperparameter tuning can be found in Appendix A.1 3.3 Baseline Methods To elaborate the effectiveness of the proposed MELM, we compare it with the following methods: Gold-Only The NER model is trained on only the original gold training set.",
"Label-wise Substitution Dai and Adel (2020) randomly substituted named entities with existing entities of the same entity type from the original training set.",
"MLM-Entity We randomly mask entity tokens and directly utilize a pretrained MLM for data augmentation without fine-tuning and labeled sequence linearization as used in MELM.",
"The prediction of a masked entity token does not consider label information but solely relies on the context words.",
"DAGA Ding et al. (2020) firstly linearized NER labels into the input sentences and then use them to train an autoregressive language model.",
"The language model was used to synthesize augmented 2 https://github.com/allanj/pytorch_ neural_crf 2255 data from scratch, where both context and entities are generated simultaneously.",
"MulDA Liu et al. (2021) fine-tuned mBART(Liu et al., 2020) on linearized multilingual NER data to generate augmented data with new context and entities.",
"As illustrated on the left side of Table 1, the proposed MELM consistently achieves the best averaged results across different low-resource levels, demonstrating its effectiveness on monolingual NER.",
"Compared to the best-performing baselines, our MELM obtains 6.3, 1.6, 1.3, 0.38 absolute gains on 100, 200, 400 and 800 levels, respectively.",
"Cross-lingual NER results are shown on the right side of Table",
"2. Again, on each of the designed low-resource levels, our MELM is superior to baseline methods in terms of the averaged F1 scores.",
"We also notice that, given 100 Nl training samples, the Gold-Only method without data augmentation almost fails to converge while the monolingual F1 of our MELM reaches 66.6, suggesting that data augmentation is crucial for NER when the annotated training data is extremely scarce.",
"To assess the efficacy of the proposed labeled sequence linearization (Section 2.1), we directly fine-tune MELM on masked sentences without linearization (as shown in Figure 2b), denoted as MELM w/o linearize in Table",
"1. We observe a considerable performance drop compared with MELM, which proves the label information injected via linearization indeed helps MELM differentiate different entity types, and generate entities compatible with the original label.",
"Taking a closer look at the baseline methods, we notice that the monolingual performance of Label-wise is still unsatisfactory in most cases.",
"One probable reason is that only existing entities within the training data are used for replacement and the entity diversity after augmentation is not increased.",
"Moreover, randomly sampling an entity of the same type for replacement is likely to cause incompatibility between the context and the entity, yielding a noisy augmented sample for NER training.",
"Although MLM-Entity tries to mitigate these two issues by employing a pretrained MLM to generate novel tokens that fit into the context, the generated tokens might not be consistent with the original labels.",
"Our MELM also promotes the entity diversity of augmented data by exploiting pretrained model for data augmentation.",
"In the meantime, equipped with the labeled sequence linearization strategy, MELM augmentation is explicitly guided by the label information and the token-label misalignment is largely alleviated, leading to superior results in comparison to Lable-wise and MLM-Entity.",
"We also compare with DAGA (Ding et al., 2020), which generates augmented data from scratch using an autoregressive language model trained on gold NER data.",
"Although DAGA is competitive on low-resource levels of 400 and 800, it still under-performs the proposed MELM by a large margin when the training size reduces to 100 or 200.",
"We attribute this to the disfluent and ungrammatical sentences generated from the undertrained language model.",
"Instead of generating augmented data from scratch, MELM focuses on modifying entity tokens and leave the context unchanged, which guarantees the quality of augmented sentences even under extremely low-resource settings.",
"For multilingual low-resource NER, we firstly directly apply MELM on the concatenation of training sets from multiple languages.",
"As shown in Table 2, MELMgold achieves substantial improvement over the Gold-only baseline, which is consistent with monolingual and cross-lingual results.",
"We compare with MulDA (Liu et al., 2021) as a baseline data augmentation method.",
"MulDA generates augmented data autoregressively with an mBART model, which is fine-tuned on NER data with inserted label tokens.",
"At the low-resource levels in our experimental settings, MulDA is less effective and even leads to deteriorated performance.",
"The unsatisfactory performance mainly results from the discrepancy between pretraining and fine-tuning due to the inserted label tokens.",
"Given very few training samples, it is difficult to adapt mBART to capture the distribution of the inserted label tokens, and thus MulDA struggles to generate fluent and grammatical sentences from scratch.",
"In comparison, our proposed method preserves the original context and introduce less syntactic noise in the augmented data.",
"To further leverage the benefits of code-mixing in multilingual NER, we experiment with two code-mixing methods: (1) Code-Mixrandom , which randomly substitutes entities with existing entities of the same type from other languages, and (2) Code-Mixess , which adopts 2256 #Gold Method Monolingual Cross-lingual En De Es Nl Avg En De En Es En Nl Avg 100 Gold-Only 50.57 39.47 42.93 21.63 38.65 39.54 37.40 39.27 38.74 Label-wise 61.34 55.00 59.54 27.85 50.93 45.85 43.74 50.51 46.70 MLM-Entity 61.22 50.96 61.29 46.59 55.02 47.96 45.42 49.34 47.57 DAGA 68.06 59.15 69.33 45.64 60.54 52.95 46.72 54.63 51.43 MELM w/o linearize 70.01 61.92 65.07 59.76 64.19 48.70 49.10 53.37 50.39 MELM (Ours) 75.21 64.12 75.85 66.57 70.44 56.56 53.83 60.62 57.00 200 Gold-Only 74.64 62.85 72.64 55.96 66.52 54.95 51.26 60.71 55.64 Label-wise 76.82 67.31 78.34 66.52 72.25 55.01 53.14 63.30 57.15 MLM-Entity 79.16 70.01 78.45 66.69 73.58 60.44 57.72 68.37 62.18 DAGA 79.11 69.82 78.95 68.53 74.10 59.58 57.68 65.74 61.00 MELM w/o linearize 81.77 71.41 80.43 72.92 76.63 62.57 63.49 70.18 65.41 MELM (Ours) 82.91 72.71 80.46 77.02 78.27 65.01 63.71 70.37 66.36 400 Gold-Only 81.85 70.77 80.02 74.60 76.81 65.76 61.57 71.04 66.12 Label-wise 84.62 74.33 81.01 77.87 79.46 66.18 67.43 71.93 68.51 MLM-Entity 83.82 74.66 81.08 77.90 79.37 67.41 70.28 74.31 70.67 DAGA 84.36 72.95 82.83 78.99 79.78 66.77 67.13 72.40 68.77 MELM w/o linearize 85.16 75.42 82.34 79.34 80.56 68.02 66.01 72.98 69.00 MELM (Ours) 85.73 77.50 83.31 80.92 81.87 68.08 70.37 75.78 71.74 800 Gold-Only 86.35 78.35 83.23 83.86 82.95 65.31 68.28 72.07 68.55 Label-wise 86.72 78.21 84.42 84.26 83.40 65.60 72.22 74.77 70.86 MLM-Entity 86.50 78.30 84.09 83.93 83.20 65.42 69.10 74.85 69.79 DAGA 86.61 77.66 84.64 84.90 83.45 68.76 70.97 75.02 71.58 MELM w/o linearize 87.35 78.58 84.59 84.94 83.99 67.37 71.53 75.20 71.37 MELM (Ours) 87.59 79.32 85.40 85.17 84.37 67.95 75.72 75.25 72.97 Table 1: Left side of table shows the results of monolingual low-resource NER.",
"Experimental results in Table 2 show that both methods are able to achieve improved performance over Gold-Only.",
"This observation suggests that code-mixing techniques, either random code-mixing or code-mixing via our entity similarity search, are indeed helpful for multilingual NER.",
"Comparing these two methods, the performance gains brought by Code-Mixess are more significant and consistent across different low-resource levels, which demonstrates the effectiveness of our proposed entity similarity search algorithm.",
"Applying MELM on both gold data and code-mixed data from Code-Mixess , the multilingual NER results are further improved.",
"In summary, our proposed MELM is well-suited for multilingual NER, which can be integrated with our code-mixing technique to achieve further improvement.",
"Apart from the quantitative results, we further analyze the augmented data to demonstrate the effectiveness of our MELM in maintaining the consistency between the original label and the augmented token.",
"Table 3 presents examples of the top-5 predictions from pretrained MLM, MELM w/o linearize and MELM.",
"As we can see, the pretrained MLM, which does not introduce any design or contraint on data augmentation, tends to generate high-frequency words such as the, he and she, and the majority of generated words do not belong to the original entity class.",
"Being finetuned on NER data with entity-oriented masking, MELM 2257 Text EU rejects German call to boycott British Lamb Label B-ORG O B-MISC O O O B-MISC O MLM Britain, EU,UK, Trump, US US, a, UN, the, UK the, a, black, white, young MELM w/o linearize EU, Australia, US, UN, Israel German, Indian, the, Washington, Union Chinese, British, raw, California, Australian MELM EU, Greenpeace, Amnesty, UN, Reuters German, British, Dutch, French, EU African, British, Guinean, white, French Text Clinton aide resigns , NBC says Label B-PER O O O B-ORG O MLM my, his, My, When, her he, she, it, and, who MELM w/o linearize French, German, British, Swiss, Russian Reuters, Pompeo, Blair Hill, AFP MELM French, White, Walker, Ferguson, David NBC, AFP, Greenpeace, BBC, Anonymous Table 3: Examples of the top-5 predictions by MLM, MELM w/o linearize and MELM.",
"w/o linearize is able to generate more entity-related tokens.",
"However, without the explicit guidance from entity labels, it is still too difficult for MELM w/o linearize to make valid predictions solely based on the ambiguous context (e.g., both Pompeo (PER) and Reuters (ORG) are compatible with the context of Example #2), which leads to token-label misalignment.",
"Compared to the above methods, our MELM take both label information and context into consideration, and thus generates more entities that fit into the context and align with the original label as well.",
"Moreover, it is noteworthy that MELM can leverage the knowledge from pretrained model to generate real-world entities that do not exist in the original NER dataset (e.g., Greenpeace and Amnesty), which essentially increases the entity diversity in training data.",
"As demonstrated in Lin et al. (2020) and our preliminary experiments in Figure 1, introducing unseen entities can effectively provide more entity regularity knowledge, and helps to improve NER performance.",
"Therefore, we examine the amount of unique entities introduced by different methods.",
"As there might be token-label misalignment in the augmented data, we firstly train an oracle' NER model on the full CoNLL dataset and then use it to tag training data of MELM and different baseline methods.",
"For each method, we count the total number of unique entities whose labels match the labels assigned by the oracle' model.",
"As shown in Figure 4, while many augmented entities from MLM-Entity, DAGA and MELM w/o linearize are filtered out due to token-label misalignment, we note that MELM introduces a significantly larger number of unseen entities in the augmented data.",
"Therefore MELM is able to provide richer entity Figure 4: Comparison between the number of unique valid entities introduced by different methods regularity knowledge, which explains its superiority over the baseline methods.",
"On sentence level tasks, one line of data augmentation methods are built upon word-level modifications, which can be based on synonym replacement (Wei and Zou, 2019), LSTM language model (Kobayashi, 2018), MLM (Wu et al., 2019; Kumar et al., 2020), auto-regressive pretrained LM (Kumar et al., 2020), or constituent-based tagging schemes (Zhong et al., 2020).",
"However, these methods suffer from token-label misalignment when applied to token-level tasks such as NER, which requires sophisticated post-processing to remove noisy samples in augmented data (Bari et al., 2021; Zhong and Cambria, 2021).",
"Existing works avoid token-label misalignment by replacing entities with existing entities of the same class (Dai and Adel, 2020), or only modifying context works and leaving entities / aspect terms unchanged (Li et al., 2020a).",
"Others attempt to produce augmented data by training / fine-tuning 2258 a generative language model on linearized labeled sequences (Ding et al., 2020; Liu et al., 2020).",
"Backtranslation (Sennrich et al., 2016; Fadaee et al., 2017; Dong et al., 2017; Yu et al., 2018) translates source language sentences into a target language, and subsequently back to the source language, which preserve the overall semantics of the original sentences.",
"On token-level tasks, however, they hinge on external word alignment tools for label propagation, which are often error-prone (Tsai et al., 2016; Li et al., 2020b).",
"We have proposed MELM as a data augmentation framework for low-resource NER.",
"Through labeled sequence linearization, we enable MELM to explicitly condition on label information when predicting masked entity tokens.",
"Thus, our MELM effectively alleviates the token-label misalignment issue and generates augmented data with novel entities by exploiting pretrained knowledge.",
"Under multilingual settings, we integrate MELM with code-mixing for further performance gains.",
"Extensive experiments show that the proposed framework demonstrates encouraging performance gains on monolingual, cross-lingual and multilingual NER across various low-resource levels.",
"This research is partly supported by the Alibaba-NTU Singapore Joint Research Institute, Nanyang Technological University.",
"Erik Cambria would like to thank the support by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A2b0046)."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"Recent dialogue coherence models use the coherence features designed for monologue texts, e.g. nominal entities, to represent utterances and then explicitly augment them with dialogue-relevant features, e.g., dialogue act labels.",
"It indicates two drawbacks,",
"(a) semantics of utterances is limited to entity mentions, and",
"(b) the performance of coherence models strongly relies on the quality of the input dialogue act labels.",
"We address these issues by introducing a novel approach to dialogue coherence assessment.",
"We use dialogue act prediction as an auxiliary task in a multi-task learning scenario to obtain informative utterance representations for coherence assessment.",
"Our approach alleviates the need for explicit dialogue act labels during evaluation.",
"The results of our experiments show that our model substantially (more than 20 accuracy points) outperforms its strong competitors on the Dai-lyDialogue corpus, and performs on par with them on the SwitchBoard corpus for ranking dialogues concerning their coherence.",
"We release our source code 1 .",
"Considering rapid progresses in developing open-domain dialogue agents (Serban et al., 2016; Ghazvininejad et al., 2018; Dinan et al., 2019; Li et al., 2019), the need for models that compare these agents in various dialogue aspects becomes extremely important (Liu et al., 2016; Dinan et al., 2019).",
"Most available methods for dialogue evaluation rely on word-overlap metrics, e.g. BLEU, and manually collected human feedback.",
"The former does not strongly correlate with human judgments (Liu et al., 2016), and the latter is time-consuming and subjective.",
"A fundamental aspect of dialogue is coherence what discriminates a high-quality 1 https://github.com/UKPLab/ acl2020-dialogue-coherence-assessment utterances shared utterance encoder DAP model DA labels DiCoh model coherence score Figure 1: A high-level view of our multi-task learning approach for dialogue coherence modeling.",
"dialogue from a random sequence of dialogue utterances (Halliday and Hasan, 1976; Grosz and Sidner, 1986; Byron and Stent, 1998).",
"Dialogue coherence deals with semantic relations between utterances considering their dialogue acts (Perrault and Allen, 1978; Cervone et al., 2018).",
"A Dialogue Act (henceforth DA ) gives a meaning to an utterance in a dialogue at the level of illocu-tionary force, and therefore, constitutes the basic unit of communication (Searle, 1969; Raheja and Tetreault, 2019).",
"A DA captures what a speaker's intention is of saying an utterance without regard to the actual content of the utterance.",
"For example, a DA may indicate whether the intention of stating an utterance is to ask a question or to state a piece of information.",
"Recent approaches to dialogue coherence modeling use the coherence features designed for monologue texts, e.g. entity transitions (Barzilay and Lapata, 2005), and augment them with dialogue-relevant features, e.g., DA labels (Cervone et al., 2018).",
"These DA labels are provided by human annotators or DA prediction models.",
"Such coherence models suffer from the following drawbacks:",
"(a) they curb semantic representations of utterances to entities, which are sparse in dialogue because of short utterance lengths, and",
"(b) their performance relies on the quality of their input DA labels.",
"We propose a novel approach to dialogue coherence assessment by utilizing dialogue act prediction as an auxiliary task for training our coherence model in a multi-task learning (MTL) scenario (Fig-ure 1).",
"Our approach consists of three high-level components: an utterance encoder , a dialogue coherence model (DiCoh) , and a Dialogue Act Prediction (DAP) model .",
"The layers of the utterance encoder are shared between the DAP and the DiCoh model.",
"This idea enables our DiCoh model to learn to focus on salient information presented in utterances considering their DAs and to alleviate the need for explicit DA labels during coherence assessment.",
"We evaluate our MTL-based approach on the DailyDialog (Li et al., 2017) and SwitchBoard (Ju-rafsky and Shriberg, 1997) English dialogue corpora in several discriminating experiments, where our coherence model, DiCoh, is examined to discriminate a dialogue from its perturbations (see Table 1).",
"We utilize perturbation methods, like utterance ordering and utterance insertion , inherited from coherence evaluation approaches for monologue texts, and also introduce two dialogue-relevant perturbations, named utterance replacement and even utterance ordering .",
"Our core contributions are: (1) proposing an MTL-based approach for dialogue coherence assessment using DAP as an auxiliary task, yielding more informative utterance representations for coherence assessment; (2) alleviating the need for DA labels for dialogue coherence assessment during evaluations; (3) an empirical evaluation on two benchmark dialogue corpora, showing that our model substantially outperforms the state-of-the-art coherence model on DailyDialog, and performs on par with it on SwitchBoard.",
"Early approaches to dialogue coherence modeling are built upon available models for monologue, such as the EntityGrid model (Barzilay and Lap-ata, 2005, 2008).",
"EntityGrid and its extensions (Burstein et al., 2010; Guinaudeau and Strube, 2013; Mesgar and Strube, 2014; Tien Nguyen and Joty, 2017; Farag and Yannakoudakis, 2019) rely on entity transitions, as proxies of semantic connectivity, between utterances.",
"These approaches are agnostic to discourse properties of dialogues (Purandare and Litman, 2008; Gandhe and Traum, 2008; Cervone et al., 2018).",
"Inspired by EntityGrid, Gandhe and Traum (2016) define transition patterns among DA labels associated with utterances to measure coherence.",
"Cervone et al. (2018) combine the above ideas by augmenting entity grids with utterance DA labels.",
"This model restricts utterance vectors only to entity mentions, and needs gold DA labels as its inputs for training as well as evaluation.",
"However, obtaining DA labels from human annotators is expensive and using DAP models makes the performance of coherence model dependent on the performance of DAP models.",
"Recent approaches to dialogue coherence modeling benefit from distributional representations of utterances.",
"Zhang et al. (2018) quantify the coherence of dialogue using the semantic similarity between each utterance and its preceding utterances.",
"This similarity is estimated, for example, by the cosine similarity between an utterance vector and a context vector where those vectors are the average of their pre-trained word embeddings.",
"Vakulenko et al. (2018) measure dialogue coherence based on the consistency of new concepts introduced in a dialogue with background knowledge.",
"Similarly, Dziri et al. (2019) utilize a natural language inference model to assess the content consistency among utterances as an indicator for dialogue coherence.",
"However, these approaches lack dialogue-relevant information to measure coherence.",
"Our MTL-based approach solves these issues:",
"(i) it benefits from DAs and semantics of utterances to measure dialogue coherence by optimizing utterance vectors for both DAP and coherence assessment, and",
"(ii) it uses DA labels to define an auxiliary task for training the DiCoh model using MTL, instead of utilizing them in a pipeline.",
"Therefore, it efficiently mitigates the need for explicit DA labels as inputs during coherence assessment.",
"We represent a dialogue between two speakers as a sequence of utterances, dial = [ utt 1 , ..., utt m ] .",
"We address the problem of designing a coherence model, DiCoh, which assigns a coherence score to dial , s dial = DiCoh ( dial ) .",
"Given a pair of dialogues = ( dial i , dial j ) , our DiCoh model ideally assigns s dial i > s dial j if and only if dialogue dial i is preferred over dialogue dial j according to their perceived coherence.",
"Instead of using gold DA labels as inputs to DiCoh, we use them to define an auxiliary task and model, DAP, to enrich utterance vectors for DiCoh in an MTL scenario.",
"Figure 2 shows a low-level illustration of our MTL-based approach.",
"Utterance encoder We use a word embedding layer, Emb , to transform the words in utterance utt = [ w 1 , ..., w n ] to a sequence of embedding vectors E = [ e 1 , ..., e n ] , where n is the number of words in utt .",
"The embedding layer can be initialized by any pre-trained embeddings to capture lexical relations.",
"We use a Bidirectional recurrent neural network with Long Short-Term Memory cells, BiLSTM , to map embeddings E to encode words in their utterance-level context: E = Emb ( utt ) , H u = BiLSTM ( E ) , (1) where H u shows the hidden state vectors [ h u 1 , ..., h un ] returned by BiLSTM .",
"At word t , h ut is the concatenation of hidden states of the forward h ut and the backward LSTMs h ut : h u t = [ h u t ; h u t ] .",
"(2) We apply a self-attention mechanism, Atten , to the hidden state vectors in H u to obtain the vector representation, u , of utterance utt : u = Atten ( H u ) .",
"(3) Generally, the attention layer, Atten , for an input vector x is defined as follows: t = x t W, t = exp ( t ) (cid:80) t exp ( t ) , o = (cid:88) t t x t , (4) where W is the parameter of this layer, and o is its weighted output vector.",
"Attention enables the utterance representation layer to encode an utterance by the weighted sum of its word embeddings.",
"It is worth noting that the parameters of the utterance encoder are shared for representing all utterances in a dialogue.",
"DiCoh model For an input dialogue dial = [ utt 1 , ..., utt m ] , the output of the utterance representation encoder is a sequence of vectors, i.e., [ u 1 , ..., u m ] .",
"Our coherence assessment model (DiCoh) combines these vectors by a BiLSTM to obtain dialogue-level contextualized representations of utterances.",
"Then, a self-attention (Equation 4) with new parameters computes the weighted average of the contextualized utterance vectors to encode the dialogue: [ h d 1 , ..., h dm ] = BiLSTM ([ u 1 , ..., u m ]) , d = Atten ([ h d 1 , ..., h dm ]) .",
"DAP model Our DAP model, which is used to solve the auxiliary DAP task, is a softmax layer which maps an utterance vector, u , to a probability distribution p a over DA labels A :",
"where W | u || A | shows the weights of the softmax layer, | u | is the size of the utterance vector, | A | is the number of DA labels, and b is the bias.",
"As illustrated in Figure 1, our main idea is to benefit from the DAP task for improving the performance of the dialogue coherence model by using them in a multi-task learning scenario.",
"We also assume that each utterance utt k is associated with DA label, a k , during training but not during evaluation.",
"We define a loss function for each task, and then use their weighted average as the total loss.",
"The DAP loss function for dialogue dial is the average cross-entropy: L dial da = 1 m (cid:88) k (1 ,...,m ) a k log( p a ( u k )) , (8) preference loss L pcoh utt 1 = [ w 1 ,...,w n ] , Emb [ e 1 ,...,e n ] BiLSTM [ h u 1 ,...,h un ] Atten u 1 ... ... ... utt m = [ w 1 ,...,w n ] , Emb [ e 1 ,...,e n ] BiLSTM [ h u 1 ,...,h un ] Atten u m BiLSTM [ h d 1 ,...,h dm ] Atten d Linear s dial i softmax ... a 1 ... a m dial i avg.",
"where m is the number of utterances in dialogue, and a k is the one-hot vector representation of the gold DA label associated with the k th utterance.",
"log ( p a ) is the natural log of probabilities over DA labels, which is obtained in Equation 7.",
"Inspired by preference learning approaches (e.g. the proposed method by Gao et al. (2019) for text summarization) we define the loss function for coherence assessment through pairwise comparisons among dialogues.",
"Given dialogue pair = ( dial i , dial j ) and its preference coherence label, l c = (cid:26) 0 if dial i is preferred over dial j , 1 otherwise, (9) the coherence loss is: L coh = max { 0 , 1 s [ l c ] + s [1 l c ] } , (10) where [ . ] is the indexing function.",
"More formally, s [ l c ] and s [1 l c ] are the coherence scores of the coherent and incoherent dialogue in pair = ( dial i , dial j ) , respectively.",
"Finally, the total loss value is the weighted combination (Kendall et al., 2018) of the above losses: L = L coh 21 + ( L dial i da + L dial j da ) 22 +log( 1 )+log( 2 ) , (11) where L dial i da and L dial j da are the losses of DAP for dialogues in pair = ( dial i , dial j ) , 1 and 2 are trainable parameters to balance the impact of losses.",
"We compute the gradient of L to update the parameters of both DiCoh and DAP models.",
"We compare our approach with several previous dialogue coherence models on DailyDialog (Li et al., 2017) and SwitchBoard (Jurafsky and Shriberg, 1997) as two benchmark English dialogue corpora.",
"Table 2 shows some statistics of these corpora.",
"DailyDialog contains human-written dialogues about daily topics (e.g. ordinary life, relationships, work, etc) collected by crowd-sourcing.",
"Crowd-workers also annotated utterances with generic DA labels from the set { Inform, Question, Directive, Commissive } .",
"Dialogues in this corpus contain a few utterances ( 8 ) making them more on topic DailyDialog SwitchBoard # dialogues 13 , 118 1 , 155 # DA labels 4 42 avg.",
"SwitchBoard contains informal English dialogues collected from phone conversations between two mutually unknown human participants.",
"The participants were given only one of 70 possible topics as initial topic to start a conversation but they were free to diverge from that topic during the conversation.",
"So, there is no concrete topic associated with each dialogue in this dataset as it is the case for dialogues in DailyDialog.",
"DA labels in SwitchBoard are about 10 times more fine-grained than those in DailyDialog.",
"For example, a question utterance in SwitchBoard may have a fine-grained DA label such as Yes-No-Question, Wh-Question, Rhetorical-Questions, etc.",
"The distribution of these acts is however highly unbalanced in SwitchBoard: the most frequent act label makes up for 36% of the utterances in the corpus, the three most frequent acts together make up for 68% of the utterances, while most of the remaining act labels just make up for 1% or less of all the utterances.",
"On average, dialogues in SwitchBoard contain more utterances than those in DailyDialog ( 192 vs 8 ) but utterances in SwitchBoard are shorter than those in DailyDialog ( 9 vs 15 ).",
"This means that dialogues in SwitchBoard are more likely to span different topics than the ones in DailyDialog.",
"The utterances in DailyDialog are explicitly cleaned of any noise, like uh-oh, or interruptions by the other speaker, as it is commonly the case for dialogues in SwitchBoard.",
"While each dialogue turn of dialogues in DailyDialog contains only one utterance, dialogue turns in SwitchBoard may consist of several utterances.",
"That is why we consider each dialogue as a sequence of dialogue utterances.",
"The goal of our experiments is to assess if a coherence model assigns coherence scores to dialogues so that a more coherent dialogue obtains a higher score than a less coherent one.",
"Since dialogues in the examined corpora, i.e. DailyDialog and SwitchBoard , are not associated with any coherence assessment score, we synthetically define four perturbation methods to destroy the coherence of dialogues in these corpora, and create a set of dialogue pairs for training and testing coherence models.",
"We borrow Utterance Ordering (UO) and Utterance Insertion (UI) from previous studies on coherence assessment (Barzilay and Lapata, 2005; Cervone et al., 2018) and also introduce Utterance Replacement (UR), and Even Utterance Ordering (EUO) as more challenging and dialogue-relevant perturbation methods.",
"Since each experiment follows a specific perturbation method, henceforth, we refer to these perturbations as problem-domains: Utterance Ordering (UO) We randomly permute the order of utterances in dialogue.",
"The original dialogue is preferred over the perturbed one.",
"Utterance Insertion (UI) We remove each utterance of a dialogue and then re-insert it in any possible utterance position in the dialogue.",
"We assume that the original place of the utterance is the best place for the insertion.",
"Therefore, a coherence model ideally discriminates the original dialogue from the perturbed ones, which are obtained by re-inserting the removed utterance in any utterance position except its original one.",
"This problem-domain is more difficult to solve than UO as the distinction between dialogues is in the position of only one utterance.",
"Utterance Replacement (UR) We randomly replace one of the utterances in a dialogue with another utterance that is also randomly selected from another dialogue.",
"The original dialogue is preferred over the dialogue generated by UR.",
"Unlike the other problem-domains, which perturb the structure of a dialogue, this problem-domain perturbs the coherence of a dialogue at its semantic level.",
"Even Utterance Ordering (EUO) This problem-domain is similar to UO but here we re-arrange the order of utterances that are said by one speaker and keep the order of the other utterances, which are said by the other speaker, fixed.",
"Therefore, EUO is more challenging and dialogue-relevant than UO.",
"This problem-domain assesses to what extent coherence models capture the coherence among utterances that are said by one of the speakers in a dialogue.",
"To create dialogue pairs for each problem-domain, we use the splits provided by the DailyDialog corpus; and for SwitchBoard we take 80% of dialogues for the training, 10% for the validation and 10% for the test sets.",
"Following Cervone et al. (2018), for any dialogue in each set we create 20 perturbations where each of which makes two pairs with the original dialogue.",
"Given dialogue dial i and its perturbation dial j , we define two dialogue pairs: ( dial i , dial j ) with preference coherence label l c = 0 and ( dial j , dial i ) with label l c = 1 .",
"In this evaluation, we train, fine-tune, and evaluate our models on the training, validation, and test sets of each problem-domain.",
"Note that these sets are constructed by the same perturbation method.",
"Compared coherence models We compare the following coherence models in this evaluation: (1) Random: This baseline model randomly ranks dialogues in an input dialogue-pair.",
"(2) CoSim (Zhang et al., 2018; Xu et al., 2018) : This model represents utterances by averaging the pre-trained embeddings of their words.",
"Then, the average of the cosine similarities between vectors of adjacent utterances is taken as the coherence score.",
"In this model, utterance vectors are made using content words by eliminating all stop words.",
"(3) ASeq (Gandhe and Traum, 2016) : This model relies only DAs transitions and is agnostic to semantic relationships (such as entity transitions) between utterances.",
"Coherence features in this model are the probabilities of n-grams across the sequence of DAs associated with the utterances in dialogue.",
"These features are supplied to a SVM to rank dialogues.",
"(4) EAGrid (Cervone et al., 2018) : This is the best performing model presented by Cervone et al. (2018) that benefits from both entity and DA transitions between utterances.",
"It represents semantic relationships across utterances via a grid, whose rows are associated with utterances and all columns represent entities but one that represents DAs.",
"Entities are a set of mentions that are extracted by a co-reference system.",
"Entries at the intersections between entity columns and an utterance row represent the grammatical role of an entity in an utterance.",
"The intersection of the DA column and an utterance shows the DA label of the utterance.",
"Cervone et al. (2018) use grammatical role transitions of entities as well as DA label transitions across utterances as indicative patterns for coherence.",
"The frequencies of these patterns are taken as coherence features, which are supplied to Support Vector Machines (SVMs) to discriminate dialogues with respect to their coherence.",
"(5) S-DiCoh: This is our coherence model, DiCoh, trained by only the supervision signal for coherence ranking, with the total loss L = L coh (see Equation 11).",
"This model does not benefit from DA information to enrich utterance vectors.",
"(6) M-DiCoh: This is our full model trained by the proposed MTL using the supervision signals for both coherence ranking and DAP.",
"The main advantage of this model is that it learns to focus on salient information of utterances for coherence assessment based on the given DAs for utterances.",
"We follow former coherence papers (Barzilay and Lapata, 2008; Guinaudeau and Strube, 2013; Mesgar and Strube, 2018; Cervone et al., 2018) and use accuracy as the evaluation metric.",
"In our experiments, this metric equals the frequency of correctly discriminated dialogue pairs in the test set of a problem-domain.",
"# of dialogue pairs",
"(12)",
"To reduce the risk of randomness in our experiments, we run each experiment five times with varying random seeds and report their average (Reimers and Gurevych, 2018).",
"Settings Each batch consists of 128 and 16 dialogue-pairs for the DailyDialog and SwitchBoard corpora, respectively.",
"Utterances are zero-padded and masked.",
"We use pretrained GloVe embeddings (Pennington et al., 2014) of size 300 wherever word embeddings are required (i.e., in CoSim, S-DiCoh, and M-DiCoh).",
"For the CoSim model, we use the SMART English stop word list (Salton, 1971) to eliminate all stop words.",
"For the ASeq model, we use bi-grams of DA labels to define the coherence features (Cervone et al., 2018).",
"All parameters of the EAGrid model have the same value as the best performing model proposed by Cervone et al. (2018).",
"In DiCoh, the size of the hidden states in LSTMs of the utterance module is 128 and of the dialogue module is 256 .",
"The parameters of this model are optimized using the Adam optimizer where its parameters have default values except the learning rate which is initiated with 0 .",
"0005 .",
"A dropout layer with p = 0 .",
"1 is applied to the utterance vectors.",
"We DailyDialog SwitchBoard Model UO UI UR EUO UO UI UR EUO Random 50 .",
"train the model for 20 epochs on DailyDialog and 10 epochs on SwitchBoard and evaluate it at the end of each epoch on the validation set.",
"The best performing model on the validation set is used for the final evaluation on the test set.",
"Parameters 1 and 2 (see Equation 11) are initiated with 2 .",
"0 and are updated during training.",
"To have fair comparisons, we train and evaluate all compared models on identical training, validation, and test sets.",
"Results Table 3 shows the accuracy of the baseline models (top) and our model (bottom) on DailyDialog and SwitchBoard.",
"We investigate how well our DiCoh model performs in comparison with its baseline peers that do not take DAs into account, i.e., Random and CoSim.",
"We observe that S-DiCoh strongly outperforms these models for all the examined problem-domains on both DailyDialog and SwitchBoard, confirming the validity of our DiCoh model for capturing the semantics of utterances.",
"In a more challenging comparison, we compare S-DiCoh with ASeq and EAGrid as the baseline models that use DA information.",
"Our S-DiCoh even surpasses these models for all problem-domains on DailyDialog.",
"However, on SwitchBoard, S-DiCoh achieves lower accuracy than these models for all problem-domains except UI.",
"This observation shows that when dialogue utterances are short (like those in SwitchBoard in comparison with those in DailyDialog), DAs are more crucial for coherence assessment.",
"It is worth noting that unlike EAGrid and ASeq, S-DiCoh is completely agnostic to DA information.",
"When we employ DAP as an auxiliary task to train the DiCoh model in our MTL setup, we observe that M-DiCoh substantially outperforms the Random, CoSim, and S-DiCoh models (which do not use DAs) for all problem-domains on both DailyDialog and SwitchBoard.",
"It concludes that our proposed MTL approach effectively leverages the DAP task to learn informative utterance vectors for dialogue coherence assessment.",
"Compared with the ASeq and EAGrid models, which explicitly use gold DA labels during evaluations, our M-DiCoh achieves the highest accuracy for all problem-domains on DailyDialog, showing that our approach for involving DAs yields more informative utterance representations for coherence assessments.",
"However, on SwitchBoard, M-DiCoh increases the accuracy of S-DiCoh up to those of EAGrid for UO and EUO.",
"Surprisingly, it achieves lower accuracy than what EAGrid achieves for UR.",
"An explanation for why M-DiCoh outperforms ASeq and EAGrid on DailyDialog but not on SwitchBoard might be that the ASeq and EAGrid models explicitly use gold DA labels during evaluation but M-DiCoh does not; and the DA labels in SwitchBoard are about 10 times higher fine-grained than those in DailyDialog (see Table 2).",
"This interpretation becomes more concrete by observing a considerable reduction in the performance of ASeq and EAGrid when they are evaluated on DailyDialog compared with when they are evaluated on SwitchBoard.",
"In contrast, our M-DiCoh, which uses DAs only during training to obtain better utterance vectors, performs almost evenly on both corpora.",
"Since our model does not need DA labels during evaluations, it is more suitable than the examined models for evaluating dialogue coherence in real scenarios.",
"Finally, to shed some light on which parts of a dialogue receive higher attentions by our M-DiCoh model, we analyze the attention weights it assigns to utterance words.",
"Table 4 illustrates the attention weights for an example dialogue from the training set of the UO problem-domain on DailyDialog, where words with higher attention weights are darker than the those with lower attention weights.",
"We observe that using dialog act prediction as an auxiliary task helps our coherence model to assign high attention weights to the salient words in dialogue utterances.",
"The wh-question, adjectives, and UO UI UR EUO 50 55 60 65 70 75 80 85 90 95 100 95 .",
"the verb in questions have higher attention weights; while in other utterances, nouns, e.g. outlet , inexpensive , prices , are more salient.",
"So, our multi-task learning approach yields richer representations of dialog utterances for coherence assessment.",
"In a more challenging evaluation setup, we use the model trained on the training set of one problem-domain to evaluate it on the test sets of the other problem-domains.",
"Therefore, the perturbation methods used for constructing the training sets differ from those used for creating the test sets.",
"We compare EAGrid as the state-of-the-art coherence model, and M-DiCoh as our complete model, for cross problem-domain evaluations on DailyDialog.",
"Results Figure 3 shows the results on the test sets of the problem-domains, where the models are trained on the training set created by the",
"(a) UO,",
"(b) UI,",
"(c) UR, and",
"(d) EUO perturbations.",
"For all perturbations used to construct the training sets, we observe that M-DiCoh outperforms EAGrid for all test perturbations.",
"Interestingly, among all examined perturbations, both M-DiCoh and EAGrid achieve the highest accuracy on UO.",
"We speculate that this perturbation is easy-to-solve as it rearranges all utterances in a dialogue.",
"Cervone et al.",
"(2018) also show that UR is easier to solve than UI.",
"We note a low-discrepancy in the accuracy of the M-DiCoh model on the test set of UO when the model is trained on the training sets of the different examined problem-domains.",
"The biggest drop in accuracy ( 3 . 2 percentage point) on the UO problem-domain is for when the model is trained on the training set of the UR problem-domain.",
"In contrast, we observe a high-discrepancy in the accuracy of the EAGrid model for the UO problem-domain when the model is trained on the training sets of different problem-domains.",
"The accuracy of EAGrid on the test set of UO drops from 71 .",
"72% (when trained for UO) to 58 .",
"7% (when trained for UR).",
"This is about 13 percentage points drop in accuracy.",
"These results confirm that our M-DiCoh model is more robust than the EAGrid model against different types of perturbation.",
"Since using DAP as an auxiliary task improves the performance of our coherence model; in this experiment, we investigate the impact of MTL on the performance of the DAP model.",
"We train our DAP model without any coherence supervision signal, S-DAP, with L = L dialida + L dialjda 2 in Equation 11, and compare it with the model that is trained with our MTL, M-DAP.",
"Results Table 5 shows the F1 metric 2 of these models for our problem-domains on the DailyDialog dataset.",
"This dataset is larger than SwitchBoard, and the frequency of dialogue act labels in this dataset is more balanced than those in SwitchBoard.",
"We use an SVM classifier supplied with Bag-of-Word representations of utterances as a baseline to put our results in context.",
"indi-2 We use F1 because there are more than two DA labels.",
"cating that the employed DAP model is suitable for solving this task.",
"However, we observe that the M-DAP model works on par with the S-DAP model.",
"This observation shows that the information encoded by the coherence model is not useful for solving the dialogue act prediction task.",
"The coherence model captures semantic relations in a dialogue by encoding information about the content of utterances.",
"Dialogue acts, which indicate speak-ers' intentions of stating utterances in a dialogue, are independent of the content of utterances, therefore information learned by the coherence model does not help the DAP model.",
"However, as the other experiments in this paper demonstrate, DAs can help to obtain more informative utterance representations to model dialogue coherence.",
"Our multi-task learning approach relieves the need for explicit DA labels for coherence assessments, which is the main goal of this paper.",
"We propose a novel dialogue coherence model whose utterance encoder layers are shared with a dialogue act prediction model.",
"Unlike previous approaches that utilize these two models in a pipeline, we use them in a multi-task learning scenario where dialogue act prediction is an auxiliary task.",
"Our coherence method outperforms its counterparts for discriminating dialogues from their various perturbations on DailyDialog, and (mostly) performs on par with them on SwitchBoard.",
"Our model",
"(a) benefits from dialogue act prediction task during training to obtain informative utterance vectors, and",
"(b) alleviates the need for gold dialogue act labels during evaluations.",
"These properties holistically make our model suitable for comparing different dialogue agents in terms of coherence and naturalness.",
"For future work, we would like to deeply study the impacts of our perturbations on the coherence of the examined dialogues.",
"We will also investigate to what extent the rankings of dialogues obtained by our model correlate with human-provided rankings.",
"This work was supported by the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1 and grant GU 798/17-1).",
"We thank Kevin Stowe and Leonardo Filipe Rodrigues Ribeiro for their valuable feedback on earlier drafts of this paper.",
"We also thank anonymous reviewers for their useful suggestions for improving the quality of the paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"other",
"other",
"other"
] |
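The multi-task setup described in the row above (a dialogue coherence scorer whose utterance encoder layers are shared with a dialogue act prediction head) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the module names, dimensions, pooling choices, margin value, and the 0.5 auxiliary-loss weight are all assumptions.

```python
# Minimal sketch of a shared-encoder multi-task model: one encoder produces
# utterance vectors; one head predicts dialogue acts (auxiliary task), the
# other scores whole-dialogue coherence. All sizes and names are illustrative.
import torch
import torch.nn as nn

class SharedUtteranceEncoder(nn.Module):
    """Encodes each utterance (a padded tensor of token ids) into one vector."""
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, utterances):            # (n_utts, max_len)
        states, _ = self.lstm(self.embed(utterances))
        return states.mean(dim=1)             # (n_utts, 2 * hidden_dim)

class MultiTaskCoherenceModel(nn.Module):
    def __init__(self, n_dialogue_acts=4, hidden_dim=128):
        super().__init__()
        self.encoder = SharedUtteranceEncoder(hidden_dim=hidden_dim)
        self.da_head = nn.Linear(2 * hidden_dim, n_dialogue_acts)  # auxiliary
        self.coherence_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, utterances):
        utt_vecs = self.encoder(utterances)
        da_logits = self.da_head(utt_vecs)                     # per-utterance
        coherence = self.coherence_head(utt_vecs.mean(dim=0))  # whole dialogue
        return coherence.squeeze(-1), da_logits

# Joint training step: a margin loss ranks the original dialogue above a
# perturbed one, plus cross-entropy on gold dialogue acts for the auxiliary
# task (gold DA labels are needed only at training time, not at evaluation).
model = MultiTaskCoherenceModel()
orig = torch.randint(1, 10_000, (5, 12))   # 5 utterances, 12 tokens each
pert = torch.randint(1, 10_000, (5, 12))   # a perturbed version of the dialogue
gold_das = torch.randint(0, 4, (5,))
s_orig, da_logits = model(orig)
s_pert, _ = model(pert)
loss = torch.relu(1.0 - s_orig + s_pert) \
       + 0.5 * nn.functional.cross_entropy(da_logits, gold_das)
loss.backward()
```

Note how the dialogue act labels enter only through the auxiliary loss term, which mirrors the claim above that explicit DA labels are not required once training is done.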
[
"Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers.",
"This has led to a belief that neural encoders can implicitly encode structural constraints, such as siblings and grandparents in a tree.",
"We tested this hypothesis and found that neural parsers may benefit from higher-order features, even when employing a powerful pre-trained encoder, such as BERT.",
"While the gains of higher-order features are small in the presence of a powerful encoder, they are consistent for long-range dependencies and long sentences.",
"In particular, higher-order models are more accurate on full sentence parses and on the exact match of modifier lists, indicating that they deal better with larger, more complex structures.",
"Before the advent of neural networks in NLP, dependency parsers relied on higher-order features to better model sentence structure (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Martins et al., 2013, inter alia ).",
"Common choices for such features were siblings (a head word and two modifiers) and grandparents (a head word, its own head and a modifier).",
"Kiperwasser and Goldberg (2016) showed that even without higher order features, a parser with an RNN encoder could achieve state-of-the-art results.",
"This led folk wisdom to suggest that modeling higher-order features in a neural parser would not bring additional advantages, and nearly all re-cent research on dependency parsing was restricted to first-order models (Dozat and Manning, 2016; Smith et al., 2018a).",
"Kulmizev et al. (2019) further reinforced this belief comparing transition and graph-based decoders (but none of which higher order); Falenska and Kuhn (2019) suggested that higher-order features become redundant because the parsing models encode them implicitly.",
"However, there is some evidence that neural parsers still benefit from structure modeling.",
"Zhang et al. (2019) showed that a parser trained with a global structure loss function has higher accuracy than when trained with a local objective (i.e., learning the head of each word independently).",
"Falenska and Kuhn (2019) examined the impact of consecutive sibling features in a neural dependency parser.",
"While they found mostly negative results in a transition-based setting, a graph-based parser still showed significant gains on two out of 10 treebanks.",
"particular, we experiment with consecutive sibling and grandparent features in a non-projective , graph-based dependency parser.",
"We found that without a pretrained encoder, these features are only useful for large treebanks; however, when using BERT, they can improve performance on most treebanks we tested on especially true for longer sentences and long-distance dependencies, and full sentence parses 1 .",
"This challenges the hypothesis that encoders can single-handedly improve parsers, or more generally, structured models in general.",
"We use x to refer to a sentence with tokens ( x 1 , x 2 , . . . , x n ) , plus the ROOT pseudo-token, and y to refer to a valid tree composed of n arcs ( h, m )",
"We overload the notation s ( ) to indicate the model score for a part or complete sentence, de-1 Our code is available at https://github.com/ deep-spin/pyturbo/ pending on its arguments.",
"We encode a x with a bidirectional LSTM, producing hidden states ( h 0 , h 1 , . . . , h n ) , with h 0 corresponding to ROOT .",
"Each token is represented by the concatenation of its pretrained word embeddings, a character-level left-to-right LSTM and, optionally, BERT embeddings.",
"Similar to Straka et al. (2019), when using BERT, we take the mean of its last four layers.",
"When the BERT tokenizer splits a token into more than one, we take the first one and ignore the rest, and we use the special token [CLS] to represent ROOT .",
"The word embeddings we use are the ones provided in the CoNLL 2018 shared task.",
"We start with a first-order model, which is used as a pruner before running the second-order parser as in Martins et al. (2013).",
"It uses biaffine attention to compute arc and label scores (Dozat and Manning, 2016), and similarly to Qi et al. (2018), we also add distance and linearization terms.",
"2 We want our pruner to be capable of estimating arc probabilities, and thus we train it with a marginal inference loss, maximizing the log probability of the correct parse tree y : L ( x , y ) = log p ( y | x ) = s ( y ) + log (cid:88) i exp( s ( y i )) .",
"We can compute the partition function over all possible trees y i efficiently using the Matrix-Tree Theorem (Koo et al., 2007), which also gives us arc marginal probabilities.",
"The sentence score s ( x , y ) is computed as the sum of the score of its parts.",
"Additionally, we try first-order models trained with a hinge loss, as Zhang et al. (2019) (also used with our second-order models; see 2.4), maximizing the margin between the correct parse tree y and any other tree y : L ( x , y ) = max y [ s ( x , y ) s ( x , y ) + ( y , y )] , where ( y , y ) is the Hamming cost between y and y , i.e., the number of arcs in which they differ.",
"2 We refer the reader to Qi et al. (2018) for further definition of the distance and linearization terms.",
"Also, like them, we only backpropagate error for these scores for the gold arcs.",
"We train second-order models with a hinge loss.",
"It is computed in the same way as in the first-order case, except now the sentence scores include second-order parts.",
"Notice that the Hamming cost still only considers differing arcs.",
"Consecutive siblings A consecutive sibling part is a tuple ( h, m, s ) such that h is the parent of both m and s , which are both to the left or to the right of h , and no other child of h exists between them.",
"Additionally, we consider tuples ( h, m, ) to indicate that m is the first child (if to the left of h ) or the last child (if to the right).",
"Grandparents A grandparent part is a tuple ( h, m, g ) such that g is the parent of h and h is the parent of m .",
"There are no grandparent parts such that h is ROOT .",
"Scoring The score for a higher order part ( h, m, r ) of type (in our case, either grandparent or consecutive sibling) is computed as: s ( h, m, r ) = w (cid:62) ( 1 tanh( h h + h r ) + 2 tanh( h m + h r ) + 3 tanh( h h + h m + h r )) , h h = f h ( h h ) , h m = f m ( h m ) , h r = f r ( h r ) .",
"where 1 , 2 and 3 are learnable scalars, w is a learnable vector, f h ( ) , f m ( ) and f r ( ) are learnable affine transforms.",
"There is a set of these parameters for consecutive siblings and another for grandparents.",
"The factors that compose the score represent different combinations of a second-order part with h , m , or both.",
"There is no factor combining h and m only, since they are already present in the first-order scoring.",
"We also introduce a parameter vector h to account for .",
"Decoding The drawback of higher-order feature templates is that exact decoding is intractable for the non-projective case.",
"Classically, researchers have resorted to approximate decoding as well as using a first-order parser to eliminate unlikely arcs and their respective higher-order parts.",
"We employ both of these techniques; specifically, we use the dual decomposition algorithm AD 3 (Martins et al., 2011, 2013) for decoding, which often arrives at the exact solution.",
"We use head automata factors to handle sibling and grandparent structures (Koo et al., 2010), and the traditional Chu-Liu-Edmonds algorithm to handle the tree constraint factor (Mc-Donald et al., 2005).",
"Multitask Learning Our models also predict UPOS, XPOS and morphology tags (UFeats), as training for these additional objectives increases parsing performance.",
"They are implemented via softmax layers on top of the BiLSTM output, and have a cross-entropy loss.",
"Parser and tagger share two BiLSTM layers, with an additional layer for each one (similar to Straka, 2018).",
"We only consider UFeats singletons in the training data, i.e., we do not decompose them into individual features.",
"Perturb and MAP During training with a hinge loss, we add noise sampled from a standard Gum-bel distribution to the arc scores, as in Papandreou and Yuille (2011).",
"This effectively makes decoding behave as sampling from the tree space.",
"Data We evaluate our models on 19 treebanks from Universal Dependencies 2.3: Afrikaans (Afri-Booms), Ancient Greek (Perseus), Arabic (PADT), Basque (BDT), Chinese (GSD), Czech (PDT), Finnish (TDT), Hebrew (HTB), Hindi (HDTB), Hungarian (Szeged), Italian (ISDT), Japanese (GSD), Korean (GSD), Persian (Seraji), Portuguese (Bosque), Russian (SynTagRUS), Swedish (Tal-banken) and Turkish (IMST).",
"In all cases, we use gold tokenization.",
"They represent varied language families, writing systems and typology, inspired by Smith et al. (2018b).",
"Hyperparameters All LSTM cells have 400 units in each direction, as well as arc and label biaffine projections.",
"Second-order layers have 200 units, and character embeddings have 250.",
"We apply dropout with p = 0 .",
"5 to all linear layers, and we use word dropout (replacing an encoded word vector with a trainable vector) with p = 0 .",
"33 in models without BERT and 0.2 in the ones with it.",
"We use Adam with 1 = 0 .",
"9 , 2 = 0 .",
"99 , and con-stant learning rate of 10 3 for the first-order models without BERT and 5 10 4 for all others.",
"We used bert-chinese for Chinese and Japanese, and bert-base-multilingual-cased for other languages; and did not fine-tune its weights.",
"We run the AD 3 decoder for up to 500 iterations with a step size of 0.05.",
"We use batches of 1,000 tokens for first-order models and 800 for second-order, and train for up to 100k batches.",
"We evaluate on the dev set each 200 batches and stop early after 50 evaluations without improvement.",
"Pruning Before training or evaluating a second-order parser, we run a first-order model trained with marginal inference to prune unlikely arcs and any second-order parts including them.",
"When using BERT in the main parser, we also use a pruner trained with BERT.",
"We keep up to 10 candidate heads for each token, and further prune arcs with posterior probability lower than a threshold t times the probability of the most likely head.",
"Without BERT, t = 10 6 , and with it t = 10 8 , as we found BERT makes the pruner overconfident.",
"The lowest pruner recall on the dev set was 98.91% (on Turkish); all other treebanks are above 99%.",
"During training, we never prune out gold arcs.",
"Table 1 shows the test set UAS and LAS for our models.",
"Parsers with BERT and hinge loss achieve the best performance in most datasets; second-order models are generally better at UAS.",
"An interesting case is Ancient Greek, which is not in BERT's pretraining data.",
"First-order models with BERT perform worse than the ones without it in UAS and LAS, but the second-order model achieves the highest UAS.",
"Without BERT, second-order features are only beneficial in some medium-to-large treebanks.",
"In the smallest ones, as Turkish and Hungarian, they actually lead to a performance drop; when using BERT, however, they increase accuracy in these datasets.",
"On the other hand, large treebanks such as Russian and Czech have improvements from second-order features even without BERT.",
"This suggests that in order for them to be beneficial, either large amounts of annotated training data are needed (which not all UD treebanks have) or a powerful encoder such as BERT.",
"Considering first-order models, Zhang et al. (2019) found no particular advantage of a hinge loss objective over a cross-entropy one or vice-versa.",
"In our experiments, this is mostly the case for models trained with small-to-medium treebanks and without BERT.",
"When more training data or a pretrained encoder is available, the hinge loss objective tends to reach higher accuracy than the cross-entropy one.",
"Figures 1, 2 and 3 show LAS by sentence length, dependency length and depth in the tree (distance to root).",
"While BERT reduces the gap between first and second-order models, the latter are consistently more accurate in sentences longer than 10 tokens, and in dependencies longer than four tokens.",
"Varying distance to root shows a somewhat irregular pattern (similar to what Kulmizev et al., 2019 found); the three BERT models are close to each other, but among the other three, the second-order parser is clearly best for depths 29.",
"Table 2 shows complete sentence matches and head words with exact match of their modifier set, over all treebanks.",
"Second-order models are better on both metrics.",
"Table 3 shows results for models that do not employ multitask learning (in our case, jointly learning UPOS, XPOS and morphological features) on the development set for a subset of the treebanks, and the results for the models that employ it on the same data.",
"All models are first order with a probabilistic loss function.",
"MTL parsers performed better except for Arabic UAS, and even then only by a small difference, which motivated us to use MTL in all our experiments.",
"Runtime Our first-order parsers without BERT process 2,000 tokens per second on average, and the second-order ones around 600 (averaged across all treebanks).",
"For models with BERT, the figures Figure 1: LAS by sentence length.",
"are 1,600 and 460, respectively.",
"3 This slowdown of 3.5x for second-order models is even smaller than the ones reported by Martins et al. (2013).",
"We compared second-order dependency parsers to their more common, first-order counterparts.",
"3 Runtime on an NVidia Titan Xp GPU.",
"While their overall performance gain was small, they are distinctively better for longer sentences and long-range dependencies.",
"Considering the exact match of complete parse trees or all modifiers of a word, second-order models exhibit an advantage over first-order ones.",
"Our results indicate that even a powerful encoder as BERT can still benefit from explicit output structure modelling; this would be interesting to explore in other NLP tasks as well.",
"Another interesting line of research would be to evaluate the contribution of higher-order features in a cross-lingual setting, leveraging structure learned from larger treebanks to underresourced languages.",
"This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundacao para a Ciencia e Tecnolo-gia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal)."
] | [
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
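The second-order part scorer reconstructed in the row above can be written down directly. The following is a minimal sketch, not the paper's code: `enc_dim=800` assumes the stated 400-unit BiLSTM per direction, and the initialization and batching details are illustrative.

```python
# Minimal sketch of the second-order part scorer: s(h, m, r) combines affine
# transforms of the BiLSTM states of head, modifier, and the third word in
# the part (grandparent or consecutive sibling) via weighted tanh factors.
import torch
import torch.nn as nn

class SecondOrderScorer(nn.Module):
    def __init__(self, enc_dim=800, hidden_dim=200):
        super().__init__()
        self.f_h = nn.Linear(enc_dim, hidden_dim)  # head transform f_h
        self.f_m = nn.Linear(enc_dim, hidden_dim)  # modifier transform f_m
        self.f_r = nn.Linear(enc_dim, hidden_dim)  # grandparent/sibling transform f_r
        self.w = nn.Parameter(torch.randn(hidden_dim) * 0.01)  # score vector w
        self.alphas = nn.Parameter(torch.ones(3))  # learnable scalars a1, a2, a3

    def forward(self, h_head, h_mod, h_rel):
        # Inputs: (batch, enc_dim) BiLSTM states for each candidate part.
        hh, hm, hr = self.f_h(h_head), self.f_m(h_mod), self.f_r(h_rel)
        a1, a2, a3 = self.alphas
        combo = (a1 * torch.tanh(hh + hr)          # factor with head only
                 + a2 * torch.tanh(hm + hr)        # factor with modifier only
                 + a3 * torch.tanh(hh + hm + hr))  # head and modifier together
        return combo @ self.w                      # (batch,) part scores

# One set of parameters per part type, as described: consecutive siblings
# and grandparents each get their own scorer.
sib_scorer, gp_scorer = SecondOrderScorer(), SecondOrderScorer()
h = torch.randn(32, 800)  # fake BiLSTM states for 32 candidate parts
print(sib_scorer(h, torch.randn(32, 800), torch.randn(32, 800)).shape)  # [32]
```

Note that there is deliberately no factor combining head and modifier alone, matching the observation above that this pair is already covered by the first-order score.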
[
"A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and feature-rich lexicons becoming less central while recurrent neural network representations rise in popularity.",
"The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods.",
"To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments.",
"We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"In the past several years, many aspects of constituency parsing and natural language processing in general have changed.",
"Grammars, which were once the central component of many parsers, have played a continually decreasing role.",
"Rich lexicons and handcrafted lexical features have become less common as well.",
"On the other hand, recurrent neural networks have gained traction as a powerful and general purpose tool for representation.",
"So far, not much has been shown about how neural networks are able to compensate for the removal of the structures used in past models.",
"To gain insight, we introduce a parser that is representative of recent trends and analyze its learned representations to determine what information it captures and what is important for its strong performance.",
"span representation based on recurrent neural networks with a novel, simplified scoring model.",
"In addition, we replace the externally predicted part-of-speech tags used in some recent systems with character-level word representations.",
"Our parser achieves a test F1 score of 92.08 on section 23 of the Penn Treebank, exceeding the performance of many other state-of-the-art models evaluated under comparable conditions.",
"Section 2 describes our model in detail.",
"The remainder of the paper is focused on analysis.",
"In Section 3, we look at the decline of grammars and output correlations.",
"Past work in constituency parsing used context-free grammars with production rules governing adjacent labels (or more generally production-factored scores) to propagate information and capture correlations between output decisions (Collins, 1997; Charniak and Johnson, 2005; Petrov and Klein, 2007; Hall et al., 2014).",
"Many recent parsers no longer have explicit grammar production rules, but still use information about other predictions, allowing them to capture output correlations (Dyer et al., 2016; Choe and Charniak, 2016).",
"Beyond this, there are some parsers that use no context for bracket scoring and only include mild output correlations in the form of tree constraints (Cross and Huang, 2016b; Stern et al., 2017).",
"In our experiments, we find that we can accurately predict parents from the representation given to a child.",
"Since a simple classifier can predict the information provided by parent-child relations, this explains why the information no longer needs to be specified explicitly.",
"We also show that we can completely remove output correlations from our model with a variant of our parser that makes independent span label decisions without any tree constraints while maintaining high F1 scores and mostly producing trees.",
"tom lexical representations, such as word shape features, prefixes, suffixes, and special tokens for categories like numerals (Klein and Manning, 2003; Petrov and Klein, 2007; Finkel et al., 2008).",
"Character-level models have shown promise in parsing and other NLP tasks as a way to remove the complexity of these lexical features (Balles-teros et al., 2015; Ling et al., 2015b; Kim et al., 2016; Coavoux and Crabbe, 2017; Liu and Zhang, 2017).",
"We compare the performance of character-level representations and externally predicted part-of-speech tags and show that these two sources of information seem to fill a similar role.",
"We also perform experiments showing that the representations learned with character-level models contain information that was hand-specified in some other models.",
"Finally, in Section 5 we look at the surface context captured by recurrent neural networks.",
"Many recent parsers use LSTMs, a popular type of recurrent neural network, to combine and summarize context for making decisions (Choe and Charniak, 2016; Cross and Huang, 2016a; Dyer et al., 2016; Stern et al., 2017).",
"Before LSTMs became common in parsing, systems that included surface features used a fixed-size window around the fenceposts at each end of a span (Charniak and Johnson, 2005; Finkel et al., 2008; Hall et al., 2014; Durrett and Klein, 2015), and the inference procedure handled most of the propagation of information from the rest of the sentence.",
"We perform experiments showing that LSTMs capture far-away surface context and that this information is important for our parser's performance.",
"We also provide evidence that word order of the far-away context is important and that the amount of context alone does not account for all of the gains seen with LSTMs.",
"Overall, we find that the same sources of information that were effective for grammar-driven parsers are also captured by parsers based on recurrent neural networks.",
"In this section, we propose a span-based parsing model that combines components from several recent neural architectures for constituency parsing and other natural language tasks.",
"While this system is primarily introduced for the purpose of our analysis, it also performs well as a parser in its own right, exhibiting some gains over comparable work.",
"Abstractly, our model consists of a single scoring function s ( i, j, ) that assigns a real-valued score to every label for each span ( i, j ) in an input sentence. We take the set of available labels to be the collection of all nonterminals and unary chains observed in the training data, treating the latter as atomic units. The score of a tree T is defined as a sum over internal nodes of labeled span scores:",
"We note that, in contrast with many other chart parsers, our model can directly score n -ary trees without the need for binarization or other tree transformations. Under this setup, the parsing problem is to find the tree with the highest score:",
"Our concrete implementation of s ( i, j, ) can be broken down into three pieces: word representation, span representation, and label scoring. We discuss each of these in turn.",
"One popular way to represent words is the use of word embeddings. We have a separate embedding for each word type in the training vocabulary and map all unknown words at test time to a single <UNK> token. In addition to word embeddings, character-level representations have also been gaining traction in recent years, with common choices including recurrent, convolutional, or bag-ofn -gram representations. These alleviate the unknown word problem by working with smaller, more frequent units, and readily capture morphological information not directly accessible through word embeddings. Character LSTMs in particular have proved useful in constituency parsing (Coavoux and Crabbe, 2017), dependency parsing (Ballesteros et al., 2015), part-of-speech tagging (Ling et al., 2015a), named entity recognition (Lample et al., 2016), and machine translation (Ling et al., 2015b), making them a natural choice for our system. We obtain a character-level representation for a word by running it through a bidirectional character LSTM and concatenating the final forward and backward outputs.",
"The complete representation of a given word is the concatenation of its word embedding and its character LSTM representation. While past work has also used sparse indicator features (Finkel et al., 2008) or part-of-speech tags predicted by an external system (Cross and Huang, 2016b) for additional word-level information, we find these to be unnecessary in the presence of a robust character-level representation.",
"To build up to spans, we first run a bidirectional LSTM over the sequence of word representations for an input sentence to obtain context-sensitive forward and backward representations f i and b i for each fencepost i . We then follow past work in dependency parsing (Wang and Chang, 2016) and constituency parsing (Cross and Huang, 2016b; Stern et al., 2017) in representing the span ( i, j ) by the concatenation of the corresponding forward and backward span differences:",
"See Figure 1 for an illustration.",
"Finally, we implement the label scoring function by feeding the span representation through a one-layer feedforward network whose output dimensionality equals the number of possible labels. The score of a specific label is the corresponding component of the output vector:",
"where g is an elementwise ReLU nonlinearity.",
"Even though our model operates on n -ary trees, we can still employ a CKY-style algorithm for efficient globally optimal inference by introducing an auxiliary empty label with s ( i, j, ) = 0 for all ( i, j ) to handle spans that are not constituents. Under this scheme, every binarization of a tree with empty labels at intermediate dummy nodes will have the same score, so an arbitrary binarization can be selected at training time with no effect on learning. We contrast this with the chart parser of Stern et al. (2017), which assigns different scores to different binarizations of the same underlying tree and in theory may exhibit varying",
"conversion. With this change in place, let s best ( i, j ) denote the score of the best subtree spanning ( i, j ) . For spans of length one, we need only consider the choice of label:",
"For general spans ( i, j ) , we have the following recursion:",
"That is, we can independently select the best label for the current span and the best split point, where the score of a split is the sum of the best scores for",
"the corresponding subtrees. To parse the full sentence, we compute s best (0 , n ) using a bottom-up chart decoder, then traverse backpointers to recover the tree achieving that score. Nodes assigned the empty label are omitted during the reconstruction process to obtain the full n -ary tree. The overall complexity of this approach is O ( n 3 + Ln 2 ) , where n is the number of words and L is the total number of labels. We note that because our system does not use a grammar, there is no constant for the number of grammar rules multiplying the O ( n 3 ) term as in traditional CKY parsing. In practice, the O ( n 2 ) evaluations of the span scoring function corresponding to the O ( Ln 2 ) term dominate runtime.",
"As is common for structured prediction problems (Taskar et al., 2005), we use margin-based training to learn a model that satisfies the constraints",
"for each training example, where T denotes the gold output, T ranges over all valid trees, and is the Hamming loss on labeled spans. Our training objective is the hinge loss:",
"This is equal to 0 when all constraints are satisfied, or the magnitude of the largest margin violation otherwise.",
"Since decomposes over spans, the inner loss-augmented decode max T [ s ( T ) + ( T, T )] can be performed efficiently using a slight modifica-tion of the dynamic program used for inference. In particular, we replace s ( i, j, ) with s ( i, j, ) + 1[ 6 = ij ] , where ij is the label of span ( i, j ) in the gold tree T .",
"We use the Penn Treebank (Marcus et al., 1993) for our experiments with the standard splits of sections 2-21 for training, section 22 for development, and section 23 for testing. Details about our model hyperparameters and training prodecure can be found in Appendix A.",
"Across 10 trials, our model achieves an average development F1 score of 92.22 on section 22 of the Penn Treebank. We use this as our primary point of comparison in all subsequent analysis. The model with the best score on the development set achieves a test F1 score of 92.08 on section 23 of the Penn Treebank, exceeding the performance of other recent state-of-the-art discriminative models which do not use external data or ensembling. 1",
"Output correlations are information about compatibility between outputs in a structured prediction model. Since outputs are all a function of the input, output correlations are not necessary for prediction when a model has access to the entire input. In practice, however, many models throughout NLP have found them useful (Collins, 1997; Lafferty et al., 2001; Koo and Collins, 2010), and",
"Liang et al. (2008) provides theoretical results suggesting they may be useful for learning efficiently. In constituency parsing, there are two primary forms of output correlation typically captured by models. The first is correlations between label decisions, which often are captured by either production scores or the history in an incremental tree-creation procedure. The second, more subtle correlation comes from the enforcement of tree constraints, since the inclusion of one bracket can affect whether or not another bracket can be present. We explore these two classes of output correlations in Sections 3.1 and 3.2 below.",
"The base parser introduced in Section 2 scores labeled brackets independently then uses a dynamic program to select a set of brackets that forms the highest-scoring tree. This independent labeling is an interesting departure from classical parsing work where correlations between predicted labels played a central role. It is natural to wonder why modeling label correlations isn't as important as it once was. Is there something about the neural representation that allows us to function without it? One possible explanation is that the neural machinery, in particular the LSTM, is handling much of the reconciliation between labels that was previously handled by an inference procedure. In other words, instead of using local information to suggest several brackets and letting the grammar handle interactions between them, the LSTM may be making decisions about brackets already in its latent state, allowing it to use the result of these decisions to inform other bracketings.",
"One way to explore this hypothesis would be",
"to evaluate whether the parser's learned representations could be used to predict parent labels of nodes in the tree. If the label of a node's parent can be predicted with high accuracy from the representation of its span, then little of the information about parent-child relations provided explicitly by a grammar has been lost. For this experiment, we freeze the input and LSTM parameters of our base model and train a new label scoring network to predict the label of a span's parent rather than the label of the span itself. We only predict parent labels for spans that have a bracket in the gold tree, so that all but the top level spans will have nonempty labels. The new network is trained with a margin loss.",
"After training on the standard training sections of the treebank, the network was able to correctly predict 92.3% of parent labels on the development set. This is fairly accurate, which supports the hypothesis that the representation knows a substantial amount about surrounding context in the output tree. For comparison, given only a span's label, the best you can do for predicting the parent is 43.3% with the majority class conditioned on the current label.",
"Like other recent parsers that do not capture correlations between output labels (Cross and Huang, 2016b; Stern et al., 2017), our base parser still does have some output correlations captured by the enforcement of tree constraints.",
"In this section, we set out to determine the importance of these output correlations by making a version of the parser where they are removed.",
"Although parsers are typically designed to form trees, the bracketing F1 measure used to evaluate parsers is still defined on non-tree outputs.",
"To remove all output correlations from our parser, we can simply remove the tree constraint and independently make decisions about whether to include a bracketed span.",
"The architecture is identical to the one described in Section 2, producing a vector of label scores for each span.",
"We choose the label with the maximum score as the label for a span.",
"As before, we fix the score of the empty label at zero, so if all other label scores are negative, the span will be left out of the set of predicted brackets.",
"We train with independent margin losses for each span.",
"is 92.20, effectively matching the performance of the tree-constrained parser.",
"In addition, we find that 94.5% of predicted bracketings for development set examples form valid trees, even though we did not explicitly encourage this.",
"This high performance shows that our parser can function well even without modeling any output correlations.",
"In this section, we investigate several common choices for lexical representations of words and their role in neural parsing.",
"We compare the performance of our base model, which uses word embeddings and a character LSTM, with otherwise identical parsers that use other combinations of lexical representations.",
"The results of these experiments are summarized in Table",
"1. First, we remove the character-level representations from our model, leaving only the word embeddings.",
"We find that development performance drops from 92.22 F1 to 91.44 F1, showing that word embeddings alone do not capture suffi-cient information for state-of-the-art performance.",
"Then, we replace the character-level representations with embeddings of part-of-speech tags predicted by the Stanford tagger (Toutanova et al., 2003).",
"This model achieves a comparable development F1 score of 92.09, but unlike our base model relies on outputs from an external system.",
"Next, we train a model which includes all three lexical representations: word embeddings, character LSTM representations, and part-of-speech tag embeddings.",
"We find that development performance is nearly identical to the base model at 92.24 F1, suggesting that character representations and predicted part-of-speech tags provide much of the same information.",
"Finally, we remove word embeddings and rely completely on character-level embeddings.",
"After retuning the character LSTM size, we find that a slightly larger character LSTM can make up for the loss in word-level embeddings, giving a development F1 of 92.24.",
"Past work in constituency parsing has demonstrated that indicator features on word shapes, suf-fixes, and similar attributes provide useful infor-1003",
"mation beyond the identity of a word itself, especially for rare and unknown tokens (Finkel et al., 2008; Hall et al., 2014).",
"We hypothesize that the character-level LSTM in our model learns similar information without the need for manual supervision.",
"To test this, we take the word representations induced by the character LSTM in our parser as fixed word encodings, and train a small feedforward network to predict binary word features defined in the Berkeley Parser (Petrov and Klein, 2007).",
"We randomly split the vocabulary of the Penn Treebank into two subsets, using 80% of the word types for training and 20% for testing.",
"We find that the character LSTM representations allow for previously handcrafted indicator features to be predicted with accuracies of 99.7% or higher in all cases.",
"The fact that this simple classifier performs so well indicates that the information contained in these features is readily available from our model's character-level encodings.",
"A detailed breakdown of accuracy by feature can be found in Appendix B. 5 Context in the Sentence LSTM In this section, we analyze where the information in the sentence-level LSTM hidden vectors comes from.",
"Since the LSTM representations we use to make parsing decisions come from the fenceposts on each side of a span, we would like to understand whether they only capture information from the immediate vicinity of the fenceposts or if they also contain more distant information.",
"Although an LSTM is theoretically capable of incorporating an arbitrarily large amount of context, it is unclear how much context it actually captures and whether this context is important for parsing accuracy.",
"First, we would like to know if the LSTM features capture distant information.",
"For this experiment, we use derivatives as a measure of sensitivity to changes in an input.",
"If the derivative of a value 0 10 20 30 40 0 .",
"with respect to a particular input is high, then that input has a large impact on the final value.",
"For a particular component of an LSTM output vector, we compute its gradient with respect to each LSTM input vector, calculate the 2 -norms of the gradients, and bucket the results according to distance from the output position. This process is repeated for every output position of each sentence in the development set, and the results are averaged within each bucket. Due to the scale of the required computation, we only use a subset of the output vector components to compute the average, sampling one at random per output vector. Figure 2 illustrates how the average gradient norm is affected by the distance between the LSTM input and output. As would be expected, the closest input vectors have the largest effect on the hidden state. However, the tail of values is fairly heavy, with substantial gradient norms even for inputs 40 words away. This shows that faraway inputs do have an effect on the LSTM representation. 5.2 Truncation Analysis Next, we investigate whether information in the LSTM representation about far-away inputs is actually important for parsing performance. To do so, we remove distant context information from our span encoding, representing spans by features obtained from LSTMs that are run on fixed-sized windows of size k around each fencepost. Figure 3 illustrates this truncated representation. Since the truncated representation also removes information about the size and position of the span in addition to the context words, we learn a position-dependent cell state initialization for each of the 1004 <START> She 0 played 1 soccer 2 in 3 the 4 ( f 4 , b 4 ) park 5 . 6 <STOP> 7 <START> She 0 played 1 ( f 1 , b 1 ) soccer 2 in 3 the 4 park 5 . 6 <STOP> 7 [ f 4 f 1 , b 1 b 4 ] Figure 3: An example of creating a truncated span representation for the span played soccer in with context size k = 2 . This representation is used to investigate the importance of information far away from the fenceposts of a span. two LSTM directions to give a more fair comparison to the full LSTM. The use of a fixed-sized context window is reminiscent of prior work by Hall et al. (2014) and Durrett and Klein (2015), but here we use an LSTM instead of sparse features. We train parsers with different values of k and observe how their performance varies. All other architecture details and hyperparameters are the same as for the original model. The blue points in Figure 4 show how the context size k affects parser performance for k { 2 , 3 , 5 , 10 , 20 , 30 } . As with the derivative analysis, although most of the weight is carried by the nearby inputs, a nontrivial fraction of performance is due to context more than 10 words away. 5.3 Word Order Now that we have established that long-distance information is important for parsing performance, we would like to know whether the order of the far-away words is important. Is the LSTM capturing far-away structure, or is the information more like a bag-of-words representation summarizing the words that appear? To test the importance of order, we train a parser where information about the order of far-away words is destroyed. As illustrated in Figure 5, we run a separate LSTM over the entire sentence for each fencepost, shuffling the input depending on the particular fencepost being represented. We randomly shuffle words outside a context window 0 10 20 30 89 90 91 92 Context Window D e v e l op m e n t F 1 Truncated Shuffled Figure 4: Development F1 as the amount of context given to the sentence-level LSTM varies. 
The blue points represent parser performance when the LSTM is truncated to a window around the fenceposts, showing that far-away context is important. The orange points represent performance when the full context is available but words outside a window around the fenceposts are shuffled, showing that the order of far-away context is also important. of size k around the fencepost of interest, keeping words on the left and the right separate so that directional information is preserved but exact positions are lost. The orange points in Figure 4 show the performance of this experiment with different context sizes k . We observe that including shuffled distant words is substantially better than truncating them completely. On the other hand, shuffling does cause performance to degrade relative to the base parser even when the unshuffled win-1005 <START> 0 played 1 She 2 soccer 3 in 4 the ( f 4 , b 4 ) 5 park 6 . 7 <STOP> <START> 0 She 1 played ( f 1 , b 1 ) 2 soccer 3 park 4 . 5 the 6 in 7 <STOP> [ f 4 f 1 , b 1 b 4 ] Figure 5: An example of creating a shuffled span representation for the span played soccer in with context size k = 2 . The light blue words are outside the context window and are shuffled randomly. Shuffled representations are used to explore whether the order of far-away words is important. dow is moderately large, indicating that the LSTM is propagating information that depends on the order of words in far-away positions. 5.4 LSTMs vs. Feedforward Finally, we investigate whether the LSTM architecture itself is important for reasons other than just the amount of context it can capture. Like any architecture, the LSTM introduces particular inductive biases that affect what gets learned, and these could be important for parser performance. We run a version of the truncation experiment from Section 5.2 where we use a feedforward network in place of a sentence-level LSTM to process the surrounding context of each fencepost. The input to the network is the concatenation of the word representations that would be used as inputs for the truncated LSTM, and the output is a vector of the same size as the LSTM-based representation. As in Section 5.2, we wish to give our representation information about span size and position, so we also include a learned fencepost position embedding in the concatenated inputs to the network. We focus on context window size k = 3 for this experiment. We search among networks with one, two, or three hidden layers that are one, two, or four times the size of the LSTM hidden state. Of all the feedforward networks tried, the maximum development performance was 83.39 F1, compared to 89.92 F1 for the LSTM-based truncation. This suggests that some property of the LSTM makes it better suited for the task of summarizing context than a flat feedforward network. 6 Related Analysis Work Here we review other works that have performed similar analyses to ours in parsing and other areas of NLP. See Section 2 for a description of how our parser is related to other parsers. Similar to our independent span prediction in Section 3.2, several works have found that their models still produce valid outputs for the majority of inputs even after relaxing well-formedness constraints. In dependency parsing, Zhang et al. (2017) and Chorowski et al. (2016) found that selecting dependency heads independently often resulted in valid trees for their parsers (95% and 99.5% of outputs form trees, respectively). In constituency parsing, the parser of Vinyals et al. 
(2015), which produced linearized parses token by token, was able to output valid constituency trees for the majority of sentences (98.5%) even though it was not constrained to do so. Several other works have investigated what information is being captured within LSTM representations. Chawla et al. (2017) performed analysis of bidirectional LSTM representations in the context of named entity recognition. Although they were primarily interested in finding specific word types that were important for making decisions, they also analyzed how distance affected a word's impact. Shi et al. (2016) and Linzen et al. 1006 (2016) perform analysis of LSTM representations in machine translation and language modeling respectively to determine whether syntactic information is present. Some of their techniques involve classification of features from LSTM hidden states, similar to our analysis in Sections 3.1 and 4.2. In Section 5.4, we found that replacing an LSTM with a feedforward network hurt performance. Previously, Chelba et al. (2017) had similar findings in language modeling, where using LSTMs truncated to a particular distance improved performance over feedforward networks that were given the same context. 7 Conclusion In this paper, we investigated the extent to which information provided directly by model structure in classical constituency parsers is still being captured by neural methods. Because neural models function in a substantially different way than classical systems, it could be that they rely on different information when making their decisions. Our findings suggest that, to the contrary, the neural systems are learning to capture many of the same knowledge sources that were previously provided, including the parent-child relations encoded in grammars and the word features induced by lexicons. Acknowledgments This work is supported by the DARPA Explainable Artificial Intelligence (XAI) program and the UC Berkeley Savio computational cluster. The second author is supported by an NSF Graduate Research Fellowship. References Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with lstms. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing . Association for Computational Linguistics, pages 349359. https://doi.org/10.18653/ v1/D15-1041 . Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05) ."
] | [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other"
] |
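The labeled-span dynamic program in the row above (best label plus best split, with a zero-scored empty label) is compact enough to sketch directly. This is an illustration under assumptions, not the authors' implementation: `label_scores` is a hypothetical dict mapping spans to {label: score}, and the handling of the top-level label is simplified for brevity.

```python
# Minimal sketch of CKY-style decoding with an auxiliary empty label:
# s_best(i, j) = max_l s(i, j, l) + max_k [s_best(i, k) + s_best(k, j)],
# with the empty label scored at 0 and dropped during tree reconstruction.
from functools import lru_cache

def parse(n, label_scores):
    """Exact chart decoding over labeled spans of a length-n sentence."""
    best_label, best_split = {}, {}

    @lru_cache(maxsize=None)
    def s_best(i, j):
        scores = dict(label_scores.get((i, j), {}))
        scores[None] = 0.0                 # empty label: fixed score of zero
        lbl = max(scores, key=scores.get)  # best label for this span
        best_label[(i, j)] = lbl
        if j - i == 1:                     # length-one span: label choice only
            return scores[lbl]
        splits = {k: s_best(i, k) + s_best(k, j) for k in range(i + 1, j)}
        k = max(splits, key=splits.get)    # best split point
        best_split[(i, j)] = k
        return scores[lbl] + splits[k]

    total = s_best(0, n)

    def build(i, j):                       # traverse backpointers; drop empties
        if j - i == 1:
            children = [(i, j)]            # leaf: a word position
        else:
            k = best_split[(i, j)]
            children = build(i, k) + build(k, j)
        lbl = best_label[(i, j)]
        return children if lbl is None else [(lbl, children)]

    return total, build(0, n)

# Toy usage with a 3-word sentence and a few scored spans.
scores = {(0, 3): {"S": 2.0}, (1, 3): {"VP": 1.5}, (0, 1): {"NP": 1.0}}
print(parse(3, scores))
```

Because the empty label scores zero everywhere, any binarization of the same underlying tree receives the same total score, which is exactly the property the row above contrasts with the chart parser of Stern et al. (2017).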